MIT News - History of science MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community. en Tue, 03 Mar 2020 00:00:00 -0500 The case for economics — by the numbers A multidecade study shows economics increasingly overlaps with other disciplines, and has become more empirical in nature. Tue, 03 Mar 2020 00:00:00 -0500 Peter Dizikes | MIT News Office <p>In recent years, criticism has been levelled at economics for being insular and unconcerned about real-world problems. But a new study led by MIT scholars finds the field increasingly overlaps with the work of other disciplines, and, in a related development, has become more empirical and data-driven, while producing less work of pure theory.</p> <p>The study examines 140,000 economics papers published over a 45-year span, from 1970 to 2015, tallying the “extramural” citations that economics papers received in 16 other academic fields — ranging from other social sciences such as sociology to medicine and public health. In seven of those fields, economics is the social science most likely to be cited, and it is virtually tied for first in citations in another two disciplines.</p> <p>In psychology journals, for instance, citations of economics papers have more than doubled since 2000. Public health papers now cite economics work twice as often as they did 10 years ago, and citations of economics research in fields from operations research to computer science have risen sharply as well.</p> <p>While citations of economics papers in the field of finance have risen slightly in the last two decades, that rate of growth is no higher than it is in many other fields, and the overall interaction between economics and finance has not changed much. That suggests economics has not been unusually oriented toward finance issues — as some critics have claimed since the banking-sector crash of 2007-2008. 
And the study’s authors contend that as economics becomes more empirical, it is less dogmatic.</p> <p>“If you ask me, economics has never been better,” says Josh Angrist, an MIT economist who led the study. “It’s never been more useful. It’s never been more scientific and more evidence-based.”</p> <p>Indeed, the proportion of economics papers based on empirical work — as opposed to theory or methodology — cited in top journals within the field has risen by roughly 20 percentage points since 1990.</p> <p>The paper, “Inside Job or Deep Impact? Extramural Citations and the Influence of Economic Scholarship,” appears in this month’s issue of the <em>Journal of Economic Literature</em>.</p> <p>The co-authors are Angrist, who is the Ford Professor of Economics in MIT’s Department of Economics; Pierre Azoulay, the International Programs Professor of Management at the MIT Sloan School of Management; Glenn Ellison, the Gregory K. Palm Professor of Economics and associate head of the Department of Economics; Ryan Hill, a doctoral candidate in MIT’s Department of Economics; and Susan Feng Lu, an associate professor of management in Purdue University’s Krannert School of Management.</p> <p><strong>Taking critics seriously</strong></p> <p>As Angrist acknowledges, one impetus for the study was the wave of criticism the economics profession has faced over the last decade, after the finance-sector crash of 2008 and the “Great Recession” of 2008-2009 that followed. The paper’s title alludes to the film “Inside Job” — whose thesis holds that, as Angrist puts it, “economics scholarship as an academic enterprise was captured somehow by finance, and that academic economists should therefore be blamed for the Great Recession.”</p> <p>To conduct the study, the researchers used the Web of Science, a comprehensive bibliographic database, to examine citations between 1970 and 2015.
The scholars developed machine-learning techniques to classify economics papers into subfields (such as macroeconomics or industrial organization) and by research “style” — meaning whether papers are primarily concerned with economic theory, empirical analysis, or econometric methods.</p> <p>“We did a lot of fine-tuning of that,” says Hill, noting that for a study of this size, a machine-learning approach is a necessity.</p> <p>The study also details the relationship between economics and four additional social science disciplines: anthropology, political science, psychology, and sociology. Among these, political science has overtaken sociology as the discipline most engaged with economics. Psychology papers now cite economics research about as often as they cite works of sociology.</p> <p>The new intellectual connectivity between economics and psychology appears to be a product of the growth of behavioral economics, which examines the irrational, short-sighted financial decision-making of individuals — a different paradigm than the assumptions about rational decision-making found in neoclassical economics.
During the study’s entire time period, one of the economics papers cited most often by other disciplines is the classic article “Prospect Theory: An Analysis of Decision under Risk,” by behavioral economists Daniel Kahneman and Amos Tversky.</p> <p>Beyond the social sciences, other academic disciplines for which the researchers studied the influence of economics include four classic business fields — accounting, finance, management, and marketing — as well as computer science, mathematics, medicine, operations research, physics, public health, and statistics.</p> <p>The researchers believe these “extramural” citations of economics are a good indicator of economics’ scientific value and relevance.</p> <p>“Economics is getting more citations from computer science and sociology, political science, and psychology, but we also see fields like public health and medicine starting to cite economics papers,” Angrist says. “The empirical share of the economics publication output is growing. That’s a fairly marked change. But even more dramatic is the proportion of citations that flow to empirical work.”</p> <p>Ellison emphasizes that because other disciplines are citing empirical economics more often, it shows that the growth of empirical research in economics is not just a self-reinforcing change, in which scholars chase trendy ideas. Instead, he notes, economists are producing broadly useful empirical research. &nbsp;</p> <p>“Political scientists would feel totally free to ignore what economists were writing if what economists were writing today wasn’t of interest to them,” Ellison says. “But we’ve had this big shift in what we do, and other disciplines are showing their interest.”</p> <p>It may also be that the empirical methods used in economics now more closely match those in other disciplines as well.</p> <p>“What’s new is that economics is producing more accessible empirical work,” Hill says. 
“Our methods are becoming more similar … through randomized controlled trials, lab experiments, and other experimental approaches.”</p> <p>But as the scholars note, there are exceptions to the general pattern in which greater empiricism in economics corresponds to greater interest from other fields. Computer science and operations research papers, which increasingly cite economists’ research, are mostly interested in the theory side of economics. And the growing overlap between psychology and economics involves a mix of theory and data-driven work.</p> <p><strong>In a big country</strong></p> <p>Angrist says he hopes the paper will help journalists and the general public appreciate how varied economics research is.</p> <p>“To talk about economics is sort of like talking about [the United States of] America,” Angrist says. “America is a big, diverse country, and economics scholarship is a big, diverse enterprise, with many fields.”</p> <p>He adds: “I think economics is incredibly eclectic.”</p> <p>Ellison emphasizes this point as well, observing that the sheer breadth of the discipline gives economics the ability to have an impact in so many other fields.</p> <p>“It really seems to be the diversity of economics that makes it do well in influencing other fields,” Ellison says. “Operations research, computer science, and psychology are paying a lot of attention to economic theory. Sociologists are paying a lot of attention to labor economics, marketing and management are paying attention to industrial organization, statisticians are paying attention to econometrics, and the public health people are paying attention to health economics. Just about everything in economics is influential somewhere.”</p> <p>For his part, Angrist notes that he is a biased observer: He is a dedicated empiricist and a leading practitioner of research that uses quasi-experimental methods.
His studies leverage circumstances in which, say, policy changes or random assignments in civic life allow researchers to study two otherwise similar groups of people separated by one thing, such as access to health care.</p> <p>Angrist was also a graduate-school advisor of Esther Duflo PhD ’99, who won the Nobel Prize in economics last fall, along with MIT’s Abhijit Banerjee — and Duflo thanked Angrist at their Nobel press conference, citing his methodological influence on her work. Duflo and Banerjee, as co-founders of MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL), are advocates of using field experiments in economics, which is yet another way of producing empirical results with policy implications.</p> <p>“More and more of our empirical work is worth paying attention to, and people do increasingly pay attention to it,” Angrist says. “At the same time, economists are much less inward-looking than they used to be.”</p> A new study examines 140,000 economics papers published from 1970 to 2015, tallying the “extramural” citations that economics papers received in 16 other academic fields, including sociology, medicine, and public health.Image: Christine Daniloff, MITEconomics, Sloan School of Management, School of Humanities Arts and Social Sciences, Research, History of science, Social sciences MIT art installation aims to empower a more discerning public With “In Event of Moon Disaster,” the MIT Center for Advanced Virtuality aims to educate the public on deepfakes with an alternative history of the moon landing. Mon, 25 Nov 2019 11:30:01 -0500 Suzanne Day | MIT Open Learning <p>Videos doctored by artificial intelligence, culturally known as “deepfakes,” are being created and shared by the public at an alarming rate. Using advanced computer graphics and audio processing to realistically emulate speech and mannerisms, deepfakes have the power to distort reality, erode truth, and spread misinformation.
In a troubling example, researchers around the world have sounded the alarm that deepfakes carry significant potential to influence American voters in the 2020 elections.</p> <p>While technology companies race to develop ways to detect and control deepfakes on social media platforms, and lawmakers search for ways to regulate them, a team of artists and computer scientists led by the MIT Center for Advanced Virtuality has designed an art installation to empower and educate the public on how to discern reality from deepfakes on their own.</p> <p>“Computer-based misinformation is a global challenge,” says Fox Harrell, professor of digital media and of artificial intelligence at MIT and director of the MIT Center for Advanced Virtuality. “We are galvanized to make a broad impact on the literacy of the public, and we are committed to using AI not for misinformation, but for truth. We are pleased to bring onboard people such as our new XR Creative Director Francesca Panetta to help further this mission.”</p> <p>Panetta is the director of “In Event of Moon Disaster,” along with co-director Halsey Burgund, a fellow in the MIT Open Documentary Lab. She says, “We hope that our work will spark critical awareness among the public. We want them to be alert to what is possible with today’s technology, to explore their own susceptibility, and to be ready to question what they see and hear as we enter a future fraught with challenges over the question of truth.”</p> <p>With “In Event of Moon Disaster,” which opened Friday at the International Documentary Festival Amsterdam, the team has reimagined the story of the moon landing. Installed in a 1960s-era living room, audiences are invited to sit on vintage furniture surrounded by three screens, including a vintage television set. The screens play an edited array of vintage footage from NASA, taking the audience on a journey from takeoff into space and to the moon.
Then, on the center television, Richard Nixon delivers a contingency speech written for him by his speechwriter, Bill Safire, “in event of moon disaster,” which he was to read if the Apollo 11 astronauts had not been able to return to Earth. In this installation, Richard Nixon reads this speech from the Oval Office.</p> <div class="cms-placeholder-content-video"></div> <p>To recreate this moving elegy that never happened, the team used deep learning techniques and the contributions of a voice actor to build the voice of Richard Nixon, producing the synthetic speech in collaboration with the Ukraine-based company Respeecher. They also worked with the Israeli company Canny AI to use video dialogue replacement techniques to study and replicate the movement of Nixon’s mouth and lips, making it look as though he is reading this very speech from the Oval Office. The resulting video is highly believable, highlighting the possibilities of deepfake technology today.</p> <p>The researchers chose to create a deepfake of this historical moment for a number of reasons: Space is a widely loved topic, so potentially engaging to a wide audience; the piece is apolitical and less likely to alienate, unlike a lot of misinformation; and, as the 1969 moon landing is an event widely accepted by the general public to have taken place, the deepfake elements will be starkly obvious.</p> <p>Rounding out the educational experience, “In Event of Moon Disaster” transparently provides information about what is possible with today’s technology, with the goal of increasing the public’s awareness of, and ability to identify, misinformation in the form of deepfakes. This information takes the form of newspapers written especially for the exhibit, which detail the making of the installation, how to spot a deepfake, and the most current work being done in algorithmic detection.
Audience members will be encouraged to take these away.</p> <p>“Our goal was to use the most advanced artificial intelligence techniques available today to create the most believable result possible — and then point to it and say, ‘This is fake; here’s how we did it; and here’s why we did it,’” says Burgund.</p> <p>While the physical installation opens in November 2019 in Amsterdam, the team is building a web-based version that is expected to go live in spring 2020.</p> "In Event of Moon Disaster" reimagines the story of the first moon landing as if the Apollo 11 astronauts had not been able to return to Earth. It was created to highlight the concern about computer-based misinformation, or "deepfakes."Photo: Chris BoebelOffice of Open Learning, Augmented and virtual reality, Machine learning, Artificial intelligence, History, Space exploration, Film and Television, Arts, Computer Science and Artificial Intelligence Laboratory (CSAIL), Comparative Media Studies/Writing, NASA, Computer science and technology, Technology and society, History of science, School of Engineering, School of Humanities Arts and Social Sciences PhD student Marc Aidinoff explores how technology impacts public policy Historian’s research focuses on understanding how visions for social and economic policy are tied to changing ideas about technology. Mon, 18 Nov 2019 14:50:01 -0500 School of Humanities, Arts, and Social Sciences <p>“Computers have encapsulated so many collective hopes and fears for the future,” says Marc Aidinoff, a PhD candidate in History/Anthropology/Science, Technology, and Society (HASTS), a doctoral program that draws on the expertise of three fields in MIT's School of Humanities, Arts, and Social Sciences (SHASS).</p> <p>“In the 1990s, you have Vice President Gore, President Clinton, and the Rev. Jesse Jackson saying that closing the digital divide was a fundamental civil rights issue of our times.
What does it mean when civil rights become about access to computers and the internet? When lack of internet access is considered a form of poverty? These are really big questions and I haven’t been able to get them out of my system.”</p> <p><strong>How is social policy tied to ideas about technology?</strong></p> <p>Aidinoff has become dedicated to understanding how policymakers have thought about technology. It makes sense. After graduating from Harvard University, Aidinoff worked for Barack Obama's presidential campaign and subsequently joined the administration working as a policy advisor for three years — including a two-year stint as the assistant policy director for Vice President Joe Biden.</p> <p>“But these questions were getting under my skin,” Aidinoff explains. “I wanted to know how visions for social and economic policy were tied to changing ideas about technology. So I became a card-carrying historian who pokes around archives from Mississippi to D.C., trying to get answers.”</p> <p><strong>Restructuring the citizen’s relationship to the state</strong></p> <p>The story in Aidinoff’s dissertation project begins in 1984, with the breakup of the Bell System and the launch of the Macintosh computer. That was also the year the U.S. federal government began measuring citizens’ access to computers. The dissertation traces policies designed to democratize information and the implementation of massive systems built to digitize the U.S. government.</p> <p>“Networked computing,” Aidinoff argues, “has been part of a larger restructuring of the citizen’s relationship to the state in U.S. history. 
For example, when you see a welfare caseworker, and there is a computer on their desk — does it matter who wrote that software?”</p> <p>The Horowitz Foundation for Social Policy presented Aidinoff with its John Stanley Award for History and Ethics earlier this year to support his efforts and fund his research trips.</p> <p>Aidinoff’s research has sent him searching for some of the same types of information he reviewed and generated as a policy advisor. He lights up when talking about a visit to the George H. W. Bush Presidential Library and Museum in College Station, Texas, to examine a hodgepodge of materials from policy memos to computer manuals. These archives help him understand how information moved around the executive branch and how policymakers would have understood technological systems.</p> <p><strong>The archive you need</strong></p> <p>Reading through the documents he locates can be difficult, however; Aidinoff credits the HASTS program for sharpening his research skills so he can home in on what is essential.</p> <p>“The HASTS faculty are really good at teaching you how to be unsatisfied until you’ve figured out how to construct the archive that you think is right for the question you’re asking. For me, that has meant a lot of newspapers and computer manuals. There’s a real belief among historians of science and technology that you need to go out and construct the archive you need. Archives aren’t just things that are waiting for you to discover. You’re going to need to go out and be creative.”</p> <p>“HASTS pushed me harder than I expected. 
I knew MIT would be challenging, but my colleagues encouraged me to spend time in places where I was less comfortable, including rural Mississippi.”</p> <p><strong>The humanistic/technical synergy at MIT</strong></p> <p>In fact, Aidinoff spent a semester at the University of Mississippi and the most recent summers teaching college-bridge courses to high school students in the Mississippi Delta with the Freedom Summer Collegiate program — an organization that continues the work of the 1964 Freedom Summer.</p> <p>For Aidinoff, there is no question that SHASS is the best place to continue his studies. The combination of rich humanities research programs and surrounding science and technology expertise was exactly what he wanted.</p> <p>“You’ve got such amazing people, world-class historians and historians of science and technology. The people I get to work with in a small, loving, interdisciplinary department is pretty extraordinary. My friends are technical, and being technical is really valued. I hang out with computer scientists all the time, which is great. I couldn’t do what I do if I didn’t have people pushing back on me from a social science perspective and from a technical engineering perspective.”</p> <p>Aidinoff’s position with the MIT Computer Science and Artificial Intelligence Laboratory’s Internet Policy Research Initiative has complemented the perspective of his home department in SHASS.</p> <p><strong>Knowledge is social</strong><br /> <br /> “A key lesson from the history of science and technology is that knowledge is social. Some knowledge comes from sitting and thinking, and that’s important. But over and over again we learn it’s sitting and thinking and then going and having lunch or a coffee with people in your discipline and across disciplines.</p> <p>“I don’t think I’ll ever again be in a community with this many historians of science per square mile. It’s just an incredibly exciting community. And it’s social. 
We think these questions really matter, so it’s worth looking up from the book, too, and having the discussion where you fight about them because these are real live questions with political consequences.”</p> <p><br /> <span style="font-size:12px;"><em>Story prepared by SHASS Communications<br /> Writer, photographer: Maria Iacobo </em></span></p> "What does it mean," Aidinoff asks "when civil rights become about access to computers and the internet? When lack of internet access is considered a form of poverty? These questions were getting under my skin," he says. "I wanted to know how social and economic policy were tied to changing ideas about technology."Photo: Maria IacoboSchool of Humanities Arts and Social Sciences, Computer Science and Artificial Intelligence Laboratory (CSAIL), Students, Anthropology, History, History of science, Policy, Civil rights, Program in STS, Computer science and technology, graduate, Graduate, postdoctoral Historian of the hinterlands In overlooked spots on the map, MIT Professor Kate Brown examines the turbulence of the modern world. Tue, 12 Nov 2019 23:59:59 -0500 Peter Dizikes | MIT News Office <p>History can help us face hard truths. The places Kate Brown studies are particularly full of them. &nbsp;</p> <p>Brown, a historian in MIT’s Program in Science, Technology, and Society, has made a career out of studying what she calls “modernist wastelands” — areas suffering after years of warfare, social conflict, and even radioactive fallout from atomic accidents.&nbsp;</p> <p>Brown has spent years conducting research in the former Soviet Union, often returning to a large region stretching across the Poland-Ukraine border, which has been beset by two world wars, ethnic cleansing, purges, famine, and changes in power. 
It’s the setting for her acclaimed first book, “A Biography of No Place” (2004), a chronicle of the region’s conflicts and their consequences.</p> <p>The same region includes the site of the Chernobyl nuclear-reactor explosion, subject of Brown’s fourth and most recent book, “Manual for Survival: A Chernobyl Guide to the Future” (2019), which uncovers extensive new evidence about the effects of the disaster on the area and its people.</p> <p>“Progress [often] occurs in big capitals, but if you go to the hinterlands, you see what’s left in the wake of progress, and it’s usually a lot of destruction,” says Brown, speaking of areas that have suffered due to technological or economic changes.</p> <p>That does not apply only to the former Soviet Union and its former satellite states, to be sure. Brown, who considers herself a transnational historian, is also the author of 2013’s “Plutopia,” reconstructing life in and around the plutonium-producing plants in Richland, Washington, and Ozersk, Russia, which have both left a legacy of nuclear contamination.</p> <p>With a record of innovative and award-winning research over more than two decades in academia, Brown joined MIT with tenure, as a professor of science, technology, and society, in early 2019.</p> <p><strong>When “no place” is like home</strong></p> <p>The lesson that life can be tough in less-glamorous locales is one Brown says she learned early on. Brown grew up in Elgin, Illinois, once headquarters of the famous Elgin National Watch Company — although that changed.</p> <p>“The year I was born, 1965, the Elgin watch factory was shuttered, and they blew up the watch tower,” Brown says. “It was a company town, and that was the main business.
I grew up watching the supporting businesses close, and then regular clothing stores and grocery stores went bankrupt.”</p> <p>And while the changes in Elgin were very different from (and less severe than) those in the places she has studied professionally, Brown believes her hometown milieu has shaped her work.</p> <p>“It was nothing near what I describe in wartime Ukraine, or Chernobyl, or one of the plutonium plants, but I finally realized I was so interested in modernist wastelands because of my own background,” Brown says.</p> <p>Indeed, Brown notes, her mother moved four times in her life because of the “deindustrialized landscape,” from places like Aliquippa, Pennsylvania, and Detroit. And her parents, she says, “moved to Elgin thinking it was healthy, small-town America. So how many times do they have to jump? … What if you care about your family and community? What if you’re loyal?”</p> <p>As it happens, part of the direct impetus for Brown’s career came from her mother. One day in the 1980s, Brown recalls, she was talking to her parents and criticizing the superficial culture surrounding U.S.-Soviet relations. To which Brown’s mother responded, “Do something about it. Study Russian, change the world.”</p> <p>As an undergraduate at the University of Wisconsin, Brown soon “took everything Russian, Russian lit and translation, grammar, history, politics, and I just got hooked. Then I thought I should go study there.” In 1987, she spent a year abroad in Leningrad (now St. Petersburg). After graduating, Brown worked for a study-abroad program in the Soviet Union for three more years, helping students troubleshoot “pretty major problems, with housing and food and medical care,” as well as some cases where students had run afoul of Soviet authorities.</p> <p>Returning to the U.S., Brown entered the graduate program in history at the University of Washington while working as a journalist.
She kept returning to the Ukraine borderlands region, collecting archival and observational material, and writing it up for her dissertation “in the narrative mode of a first-person travelogue.”</p> <p>That did not fit the model of a typical PhD thesis. But Richard White, a prominent American historian with an openness toward innovative work, who was then at the University of Washington, advocated for keeping the form of Brown’s work largely intact. She received her PhD, and more: Her thesis formed the basis of “A Biography of No Place,” which won the George Louis Beer Prize for International European History from the American Historical Association (AHA). Brown joined the faculty at the University of Maryland, Baltimore County, before joining MIT.</p> <p><strong>A treasure island for research</strong></p> <p>In all of Brown’s books, a significant portion of the work, a bit atypically for academia, has continued to incorporate first-person material about her travels, experiences, and research, something she also regards as crucial.</p> <p>“Because these places are rarely visited, they’re hard to imagine for the readers,” Brown says. “That puts me in the narrative, though not for all of it.”</p> <p>Brown’s approach to history is also highly archival: She has unearthed key documents in all manner of local, regional, and national repositories. When she entered the profession, in the 1990s, many Soviet archives were just opening up, providing some rich opportunities for original research.</p> <p>“It’s amazing,” Brown says. “Over and over again I’ve been one of the first persons to walk into an archive and see what’s there. And that is just sort of a treasure island quality of historical research.
Being a Soviet historian in the early 1990s, there was nothing else like it.”</p> <p>The archives continue to be profitable for Brown, yielding some of her key new insights in “Manual for Survival.” In assessing Chernobyl, Brown shows, local and regional studies of the disaster’s effects were often extensive and candid, but the official record became sanitized as it moved up the Soviet bureaucratic hierarchy.</p> <p>Brown’s combination of approaches to writing history has certainly produced extensive professional success. “Plutopia” was awarded the AHA’s Albert J. Beveridge and John H. Dunning prizes as the best book in American history and the Organization of American Historians’ Ellis H. Hawley Award, among others. Brown has also received Guggenheim Foundation and Carnegie Foundation fellowships.</p> <p>Brown is currently working on a new research project, examining overlooked forms of human knowledge about plants and the natural environment. She notes that there are many types of “indigenous knowledge and practices we have missed or rejected,” which could foster a more sustainable relationship between human society and the environment.</p> <p>It is a different type of topic than Brown’s previous work, although, like her other projects, this one recognizes that we have spent too long mishandling the environment, rather than prioritizing its care — another hard truth to consider.</p> Kate Brown is a professor in MIT's Program in Science, Technology, and Society.Image: Allegra BovermanSchool of Humanities Arts and Social Sciences, Faculty, Profile, Technology and society, Energy, History, Program in STS, Nuclear power and reactors, History of science, Science communications Meet Carolyn Stein: Researching the economics of science MIT PhD student explores the impact of scientists being &quot;scooped&quot; when a competing research team publishes results first, a concern for many disciplines. 
Mon, 23 Sep 2019 09:00:00 -0400 School of Humanities, Arts, and Social Sciences <p>Carolyn Stein says she’s not a morning person. And yet …</p> <p>“All of a sudden I’m going on bike rides with people that leave at 5:30 a.m.,” she says, shaking her head in surprise.</p> <p>Such is the appeal of MIT Cycling Club for Stein, a doctoral student in MIT’s Department of Economics, located within the School of Humanities, Arts, and Social Sciences. After inheriting an old road bike last year, she has been shifting gears, literally and figuratively.</p> <p>“It’s a wonderful thing to have happened and it’s how I’ve met people across the Institute,” Stein says.</p> <p>After graduating from Harvard University with degrees in applied mathematics and economics, Stein worked for a Boston hedge fund for two years. Upon arriving at MIT, she planned to study labor economics and explore why some people reach their potential in the labor force while others do not. But before long, Stein had decided to shift her area of research to the economics of science.</p> <p><strong>The economics of science</strong></p> <p>“The focus on science was influenced by one of my advisers, Professor Heidi Williams,” she says, “and also just by being at MIT surrounded by people who do science all the time. I’ve been learning what an interesting and difficult career path science is. On its surface, academic science is different from other jobs that economists typically study. For one, scientists are often motivated by factors other than wages.<br /> <br /> “But many insights from labor economics can still help us understand how the field of science functions. Incentive and career concerns still matter. And risk is a big concern in science. You could have a very good idea, but get scooped. That can derail a scientist, and a whole year’s worth of work could be lost.
That's where this research idea began.”<br /> <br /> Stein and her research partner, Ryan Hill, also a doctoral student in the MIT economics department, are working on two projects simultaneously, both of which focus on the careers of scientists and the incentives they face. Their first paper explores what happens when a scientist is “scooped” or, in other words, what happens to scientists when a competing research team publishes their results first. It’s a concern that resonates with researchers across many disciplines.<br /> <br /> <strong>The impact of being scooped</strong></p> <p>“Economists often worry that while we’re working on something we’re going to flip open a journal and see that someone else has already written the same paper,” Stein says. “This is an even bigger deal in science. In our project, we're studying a particular field of structural biology where we can actually look at data at the level of proteins and find cases where two scientists are simultaneously trying to solve the structure of the same protein.<br /> <br /> “But one person gets there first and publishes. We’re trying to learn what happens to the other scientist, who has been scooped. Are they still able to publish? Do they get published in a lower-ranked journal, or receive fewer citations? Anecdotally, scientists say they’re very stressed about being scooped, so we’re trying to measure how much they’re penalized, if they are.”<br /> <br /> <strong>The tension between quality and competition</strong></p> <p>Stein and Hill's second paper examines the tradeoff between competition and quality in science. If competition is fierce and scientists are working overtime to get their work done sooner, the science may progress faster, Stein reasons. But if the fear of being scooped is high, scientists may decide to publish early. As a result, the work may not be as thorough.<br /> <br /> “In that case, we miss out on the highest quality work these scientists could produce,” Stein says.
“You’re looking at a trade-off. Competition means that science progresses faster, but corners may have been cut. How we as a society should feel about this probably depends on the balance of that trade-off. That’s the tension that we’re trying to explore.”<br /> <br /> <strong>Work that resonates</strong></p> <p>After several years working and studying at MIT, Stein is now excited to see how things have coalesced: Her research topic has received positive feedback from the MIT community; she’s “super happy” with her advisers — professors Heidi Williams and Amy Finkelstein in the Department of Economics, and Pierre Azoulay, a professor of management in the MIT Sloan School of Management — and collaborating with Hill has “made the whole experience much more fun and companionable.” (Williams, who continues to serve as Stein's adviser, is now on the faculty of Stanford University.)<br /> <br /> “I want to do things that resonate with people inside and outside the economics field," Stein reflects. "A really rewarding part of this project has been talking to people who do science and asking them if our work resonates with them. Having scientists completely understand what we’re talking about is a huge part of the fun for me.”<br /> <br /> Another activity Stein is enthusiastic about is her teaching experience with professors Williams and David Autor, which has affirmed her interest in an academic career. “I find teaching incredibly gratifying,” Stein says. “And I’ve had the privilege of being a teaching assistant here for professors who care a great deal about teaching.”<br /> <br /> <strong>Women in economics</strong></p> <p>Stein would also like to encourage more women to explore a career in economics. She notes that if you were to poll students in their first year, they would likely say that economics is about what they read in <em>The Wall Street Journal:</em> finance, international trade, and money.<br /> <br /> “But it’s much more than that,” Stein says.
“Economics is more like a set of tools that you can apply to an astonishingly wide variety of things. I think that if more people knew this, and knew it sooner in their college career, a much more diverse group of people would want to study the field.”<br /> <br /> Career options in the private sector are also increasing for economists, she says. “A lot of tech companies now realize they love economics PhDs. These companies collect so much data. It’s an opportunity to actually do a job that uses your degree.”<br /> <br /> <strong>A sport with data</strong></p> <p>As the 2019 fall academic term gets underway, Stein is focused on writing her thesis and preparing for the academic job market. To explore her native New England as well as to escape the rigors of thesis-writing, she’s also looking forward to rides with the MIT Cycling Club.</p> <p>“A few weekends ago," she says, "we drove up to Vermont to do this completely insane ride over six mountain passes. The club is such a wonderful group of people. And cycling can be a very nerdy sport with tons of data to analyze.”</p> <p>So, maybe not a total escape.<br /> &nbsp;</p> <h5><em>Story by MIT SHASS Communications<br /> Editorial Team: Emily Hiestand and Maria Iacobo </em></h5> "Scientists are often motivated by factors other than wages,” says Carolyn Stein, "but many insights from labor economics still help us understand how the field of science functions. Incentive and career concerns still matter. And risk is a big concern.” Photo: Maria Iacobo. Economics, School of Humanities Arts and Social Sciences, Social sciences, Profile, Women, History of science, Data, Analytics, Behavioral economics, Students, Graduate, postdoctoral New science blooms after star researchers die, study finds Deaths of prominent life scientists tend to be followed by a surge in highly cited research by newcomers.
Thu, 29 Aug 2019 00:00:01 -0400 Peter Dizikes | MIT News Office <p>The famed quantum physicist Max Planck had an idiosyncratic view about what spurred scientific progress: death. That is, Planck thought, new concepts generally take hold after older scientists with entrenched ideas vanish from the discipline.</p> <p>“A great scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it,” Planck once wrote.</p> <p>Now a new study co-authored by MIT economist Pierre Azoulay, an expert on the dynamics of scientific research, concludes that Planck was right. In many areas of the life sciences, at least, the deaths of prominent researchers are often followed by a surge in highly cited research by newcomers to those fields.</p> <p>Indeed, when star scientists die, their subfields see a subsequent 8.6 percent increase, on average, in articles by researchers who have not previously collaborated with those star scientists. Moreover, those papers published by the newcomers to these fields are much more likely to be influential and highly cited than other pieces of research.</p> <p>“The conclusion of this paper is not that stars are bad,” says Azoulay, who has co-authored a new paper detailing the study’s findings. “It’s just that, once safely ensconced at the top of their fields, maybe they tend to overstay their welcome.”</p> <p>The paper, “Does Science Advance One Funeral at a Time?” is co-authored by Azoulay, the International Programs Professor of Management at the MIT Sloan School of Management; Christian Fons-Rosen, an assistant professor of economics at the University of California at Merced; and Joshua Graff Zivin, a professor of economics at the University of California at San Diego and faculty member in the university’s School of Global Policy and Strategy.
It is forthcoming in the <em>American Economic Review</em>.</p> <p>To conduct the study, the researchers used a database of life scientists that Azoulay and Graff Zivin have been building for well over a decade. In it, the researchers chart the careers of life scientists, looking at accomplishments that include funding awards, published papers and the citations of those papers, and patent statistics.</p> <p>In this case, Azoulay, Graff Zivin, and Fons-Rosen studied what occurred after the unexpected deaths of 452 life scientists, who were still active in their disciplines. In addition to the 8.6 percent increase in papers by new entrants to those subfields, there was a 20.7 percent decrease in papers by the rather smaller number of scientists who had previously co-authored papers with the star scientists.</p> <p>Overall, Azoulay notes, the study provides a window into the power structures of scientific disciplines. Even if well-established scientists are not intentionally blocking the work of researchers with alternate ideas, a group of tightly connected colleagues may wield considerable influence over journals and grant awards. In those cases, “it’s going to be harder for those outsiders to make a mark on the domain,” Azoulay notes.</p> <p>“The fact that if you’re successful, you get to set the intellectual agenda of your field, that is part of the incentive system of science, and people do extraordinary positive things in the hope of getting to that position,” Azoulay notes. “It’s just that, once they get there, over time, maybe they tend to discount ‘foreign’ ideas too quickly and for too long.”</p> <p>Thus what the researchers call “Planck’s Principle” serves as an unexpected — and tragic — mechanism for diversifying bioscience research.</p> <p>The researchers note that in referencing Planck, they are extending his ideas to a slightly different setting than the one he himself was describing. 
In his writing, Planck was discussing the birth of quantum physics — the kind of epochal, paradigm-setting shift that rarely occurs in science. The current study, Azoulay notes, examines what happens in everyday “normal science,” in the phrase of philosopher Thomas Kuhn.</p> <p>The process of bringing new ideas into science, and then hanging on to them, is only to be expected in many areas of research, according to Azoulay. Today’s seemingly stodgy research veterans were once themselves innovators facing an old guard.</p> <p>“They had to hoist themselves atop the field in the first place, when presumably they were [fighting] the same thing,” Azoulay says. “It’s the circle of life.”</p> <p>Or, in this case, the circle of life science.</p> <p>The research received support from the National Science Foundation, the Spanish Ministry of Economy and Competitiveness, and the Severo Ochoa Programme for Centres of Excellence in R&amp;D.</p> A study co-authored by MIT professor Pierre Azoulay has shown that in many areas of the life sciences, the deaths of prominent researchers are often followed by a surge in highly cited research by newcomers to those fields. Sloan School of Management, Research, Faculty, Economics, History of science, National Science Foundation (NSF) Behind the scenes of the Apollo mission at MIT From making the lunar landings possible to interpreting the meaning of the moon rocks, the Institute was a vital part of history. Thu, 18 Jul 2019 09:23:27 -0400 David L. Chandler | MIT News Office <p>Fifty years ago this week, humanity made its first expedition to another world, when Apollo 11 touched down on the moon and two astronauts walked on its surface.
That moment changed the world in ways that still reverberate today.</p> <p>MIT’s deep and varied connections to that epochal event — <a href="">many</a> of <a href="">which</a> have been <a href="">described</a> on <em>MIT News</em> — began years before the actual landing, when the MIT Instrumentation Laboratory (now Draper) signed the very first contract to be awarded for the Apollo program after its announcement by President John F. Kennedy in 1961. The Institute’s involvement continued throughout the program — and is still ongoing today.</p> <p>MIT’s role in creating the navigation and guidance system that got the mission to the moon and back has been widely recognized in books, movies, and television series. But many other aspects of the Institute’s involvement in the Apollo program and its legacy, including advances in mechanical and computational engineering, simulation technology, biomedical studies, and the geophysics of planet formation, have remained less celebrated.</p> <p>Amid the growing chorus of recollections in various media that have been appearing around this 50th anniversary, here is a small collection of bits and pieces about some of the unsung heroes and lesser-known facts from the Apollo program and MIT’s central role in it.</p> <p><strong>A new age in electronics</strong></p> <p>The computer system and its software that controlled the spacecraft — called the Apollo Guidance Computer and designed by the MIT Instrumentation Lab team under the leadership of Eldon Hall — were remarkable achievements that helped push technology forward in many ways.</p> <p>The AGC’s programs were written in one of the first-ever compiler languages, called MAC, which was developed by Instrumentation Lab engineer <a href="" target="_blank">Hal Laning</a>. 
The computer itself, the 1-cubic-foot Apollo Guidance Computer, was the first significant use of silicon integrated circuit chips and greatly accelerated the development of the microchip technology that has gone on to change virtually every consumer product.</p> <p>In an age when most computers took up entire climate-controlled rooms, the compact AGC was uniquely small and lightweight. But most of its “software” was actually hard-wired: The programs were woven, with tiny donut-shaped metal “cores” strung like beads along a set of wires, with a given wire passing outside the donut to represent a zero, or through the hole for a 1. These so-called rope memories were made in the Boston suburbs at Raytheon, mostly by women who had been hired because they had experience in the weaving industry. Once made, there was no way to change individual bits within the rope, so any change to the software required weaving a whole new rope, and last-minute changes were impossible.</p> <p>As David Mindell, the <em>Frances and David Dibner Professor of the History of Engineering and Manufacturing,</em> points out in his book “Digital Apollo,” that system represented the first time a computer of any kind had been used to control, in real-time, many functions of a vehicle carrying human beings — a trend that continues to accelerate as the world moves toward self-driving vehicles. Right after the Apollo successes, the AGC was directly adapted to an F-8 fighter jet, to create the first-ever fly-by-wire system for aircraft, where the plane’s control surfaces are moved via a computer rather than direct cables and hydraulic systems. This approach is now widespread in the aerospace industry, says John Tylko, who teaches MIT’s class <a href="" target="_blank">16.895J</a> (Engineering Apollo: The Moon Project as a Complex System), which is taught every other year.</p> <p>As sophisticated as the computer was for its time, computer users today would barely recognize it as such. 
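As an aside, the core-rope encoding described above, in which a sense wire woven through a core reads as a 1 and a wire passing outside it reads as a 0, can be sketched in a few lines of Python. The words and function names here are purely illustrative, not drawn from actual AGC code:

```python
# Toy illustration (hypothetical data, not actual AGC software) of core-rope
# memory: each stored word is fixed by how the sense wires were woven, so the
# memory is strictly read-only once made.

def weave_rope(words, width=15):
    """'Weave' a rope: one immutable tuple of bits per AGC-style 15-bit word."""
    return tuple(
        tuple((word >> bit) & 1 for bit in reversed(range(width)))
        for word in words
    )

def read_word(rope, address):
    """Sense the wires at one address and reassemble the stored word."""
    return sum(bit << i for i, bit in enumerate(reversed(rope[address])))

rope = weave_rope([0o30001, 0o00006])  # two made-up instruction words
assert read_word(rope, 0) == 0o30001   # flipping one bit would mean reweaving
```

The immutable tuples mirror the property the article describes: like the woven ropes, they cannot be patched in place, which is why any late software change required weaving an entirely new rope.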
Its keyboard and display screen looked more like those on a microwave oven than a computer: a simple numeric keypad and a few lines of five-digit luminous displays. Even the big mainframe computer used to test the code as it was being developed had no keyboard or monitor that the programmers ever saw. Programmers wrote their code by hand, then typed it onto punch cards — one card per line — and handed the deck of cards to a computer operator. The next day, the cards would be returned with a printout of the program’s output. And in this time long before email, communications among the team often relied on handwritten paper notes.</p> <p><strong>Priceless rocks</strong></p> <p>MIT’s involvement in the geophysical side of the Apollo program also extends back to the early planning stages — and continues today. For example, Professor Nafi Toksöz, an expert in seismology, helped to develop a seismic monitoring station that the astronauts placed on the moon, where it helped lead to a greater understanding of the moon’s structure and formation. “It was the hardest work I have ever done, but definitely the most exciting,” <a href="">he has said</a>.</p> <p>Toksöz says that the data from the Apollo seismometers “changed our understanding of the moon completely.” The seismic waves, which on Earth continue for a few minutes, lasted for two hours, which turned out to be the result of the moon’s extreme lack of water. “That was something we never expected, and had never seen,” he recalls.</p> <p>The first seismometer was placed on the moon’s surface very shortly after the astronauts landed, and seismologists including Toksöz started seeing the data right away — including every footstep the astronauts took on the surface. 
Even when the astronauts returned to the lander to sleep before the morning takeoff, the team could see that Buzz Aldrin&nbsp;ScD ’63 and Neil Armstrong were having a sleepless night, with every toss and turn dutifully recorded on the seismic traces.</p> <p>MIT Professor Gene Simmons was among the first group of scientists to gain access to the lunar samples as soon as NASA released them from quarantine, and he and others in what is now the Department of Earth, Planetary and Atmospheric Sciences (EAPS) have continued to work on these samples ever since. As part of a conference on campus, he exhibited some samples of lunar rock and soil in their first close-up display to the public, where some people may even have had a chance to touch the samples.</p> <p>Others in EAPS have also been studying those Apollo samples almost from the beginning. Timothy Grove, the Robert R. Shrock Professor of Earth and Planetary Sciences, started studying the Apollo samples in 1971 as a graduate student at Harvard University, and has been doing research on them ever since. Grove says that these samples have led to major new understandings of planetary formation processes that have helped us understand the Earth and other planets better as well.</p> <p>Among other findings, the rocks showed that ratios of the isotopes of oxygen and other elements in the moon rocks were identical to those in terrestrial rocks but completely different than those of any meteorites, proving that the Earth and the moon had a common origin and leading to the hypothesis that the moon was created through a giant impact from a planet-sized body. The rocks also showed that the entire surface of the moon had likely been molten at one time. The idea that a planetary body could be covered by an ocean of magma was a major surprise to geologists, Grove says.</p> <p>Many puzzles remain to this day, and the analysis of the rock and soil samples goes on. 
“There’s still a lot of exciting stuff” being found in these samples, Grove says.</p> <p><strong>Sorting out the facts</strong></p> <p>In the spate of publicity and new books, articles, and programs about Apollo, inevitably some of the facts — some trivial, some substantive — have been scrambled along the way. “There are some myths being advanced,” says Tylko, some of which he addresses in his “Engineering Apollo” class. “People tend to oversimplify” many aspects of the mission, he says.</p> <p>For example, many accounts have described the sequence of alarms that came from the guidance computer during the last four minutes of the lunar descent, forcing mission controllers to make the daring decision to go ahead despite the unknown nature of the problem. But Don Eyles, one of the Instrumentation Lab’s programmers who had written the landing software for the AGC, says that he can’t think of a single account he’s read about that sequence of events that gets it entirely right. According to Eyles, many have claimed the problem was caused by the fact that the rendezvous radar switch had been left on, so that its data were overloading the computer and causing it to reboot.</p> <p>But Eyles says the actual reason was a much more complex sequence of events, including a crucial mismatch between two circuits that would only occur in rare circumstances and thus would have been hard to detect in testing, and a probably last-minute decision to put a vital switch in a position that allowed it to happen. Eyles has described these details in a memoir about the Apollo years and in a technical paper <a href="" target="_blank">available online</a>, but he says they are difficult to summarize simply.
But he thinks the author Norman Mailer may have come closest, capturing the essence of it in his book “Of a Fire on the Moon,” where he describes the issue as caused by a “sneak circuit” and an “undetectable” error in the onboard checklist.</p> <p>Some accounts have described the AGC as a very limited and primitive computer compared to today’s average smartphone, and Tylko acknowledges that it had a tiny fraction of the power of today’s smart devices — but, he says, “that doesn’t mean they were unsophisticated.” While the AGC only had about 36 kilobytes of read-only memory and 2 kilobytes of random-access memory, “it was exceptionally sophisticated and made the best use of the resources available at the time,” he says.</p> <p>In some ways it was even ahead of its time, Tylko says. For example, the compiler language developed by Laning along with Ramon Alonso at the Instrumentation Lab used an architecture that he says was relatively intuitive and easy to interact with. Based on a system of “verbs” (actions to be performed) and “nouns” (data to be worked on), “it could probably have made its way into the architecture of PCs,” he says. “It’s an elegant interface based on the way humans think.”</p> <p>Some accounts go so far as to claim that the computer failed during the descent and astronaut Neil Armstrong had to take over the controls and land manually, but in fact partial manual control was always part of the plan, and the computer remained in ultimate control throughout the mission. 
None of the onboard computers ever malfunctioned through the entire Apollo program, according to astronaut David Scott SM ’62, who used the computer on two Apollo missions: “We never had a failure, and I think that is a remarkable achievement.”</p> <p><strong>Behind the scenes</strong></p> <p>At the peak of the program, a total of about 1,700 people at MIT’s Instrumentation Lab were working on the Apollo program’s software and hardware, according to Draper, the Instrumentation Lab’s successor, which spun off from MIT in 1973. A few of those, such as the near-legendary <a href="">“Doc” Draper</a> himself — Charles Stark Draper ’26, SM ’28, ScD ’38, former head of the Department of Aeronautics and Astronautics (AeroAstro) — have become widely known for their roles in the mission, but most did their work in near-anonymity, and many went on to entirely different kinds of work after the Apollo program’s end.</p> <p><a href="">Margaret Hamilton</a>, who directed the Instrumentation Lab’s Software Engineering Division, was little known outside of the program itself until an <a href="" target="_self">iconic photo</a> of her next to the original stacks of AGC code began making the rounds on social media in the mid-2010s. In 2016, when she was awarded the Presidential Medal of Freedom by President Barack Obama, MIT Professor Jaime Peraire, then head of AeroAstro, said of Hamilton that “She was a true software engineering pioneer, and it’s not hyperbole to say that she, and the Instrumentation Lab’s Software Engineering Division that she led, put us on the moon.” After Apollo, Hamilton went on to found a software services company, which she still leads.</p> <p>Many others who played major roles in that software and hardware development have also had their roles little recognized over the years.
For example, Hal Laning ’40, PhD ’47, who developed the programming language for the AGC, also devised its executive operating system, which employed what was at the time a new way of handling multiple programs at once, by assigning each one a priority level so that the most important tasks, such as controlling the lunar module’s thrusters, would always be taken care of. “Hal was the most brilliant person we ever had the chance to work with,” Instrumentation Lab engineer Dan Lickly <em><a href="" target="_blank">told MIT Technology Review</a></em>. And that priority-driven operating system proved crucial in allowing the Apollo 11 landing to proceed safely in spite of the 1202 alarms going off during the lunar descent.</p> <p>While the majority of the team working on the project was male, software engineer Dana Densmore recalls that compared to the heavily male-dominated workforce at NASA at the time, the MIT lab was relatively welcoming to women. Densmore, who was a control supervisor for the lunar landing software, told <em>The Wall Street Journal</em> that “NASA had a few women, and they kept them hidden. At the lab it was very different,” and there were opportunities for women there to take on significant roles in the project.</p> <p>Hamilton recalls the atmosphere at the Instrumentation Lab in those days as one of real dedication and meritocracy. As she told <a href="">MIT News in 2009</a>, “Coming up with solutions and new ideas was an adventure. Dedication and commitment were a given. Mutual respect was across the board. Because software was a mystery, a black box, upper management gave us total freedom and trust. We had to find a way and we did. Looking back, we were the luckiest people in the world; there was no choice but to be pioneers.”</p> The computer system and software that controlled the Apollo 11 spacecraft — called the Apollo Guidance Computer and designed by the MIT Instrumentation Lab team — helped push technology forward in many ways. 
The computer itself was the first significant use of silicon integrated circuit chips. Image courtesy of the MIT Museum. Aeronautical and astronautical engineering, EAPS, Research, Satellites, School of Engineering, History of MIT, NASA, History of science, Moon, Space, astronomy and planetary science, Women in STEM, School of Science Professor Emeritus Fernando Corbató, MIT computing pioneer, dies at 93 Longtime MIT professor developed early “time-sharing” operating systems and is widely credited as the creator of the world’s first computer password. Mon, 15 Jul 2019 09:01:00 -0400 Adam Conner-Simons | Rachel Gordon | MIT CSAIL <p>Fernando “Corby” Corbató, an MIT professor emeritus whose work in the 1960s on time-sharing systems broke important ground in democratizing the use of computers, died on Friday, July 12, at his home in Newburyport, Massachusetts. He was 93.</p> <p>Decades before the existence of concepts like cybersecurity and the cloud, Corbató led the development of one of the world’s first operating systems. His “Compatible Time-Sharing System” (CTSS) allowed multiple people to use a computer at the same time, greatly increasing the speed at which programmers could work. It’s also widely credited as the <a href="" target="_blank">first computer system to use passwords</a>.&nbsp;</p> <p>After CTSS, Corbató led a time-sharing effort called Multics, which directly inspired operating systems like Linux and laid the foundation for many aspects of modern computing. Multics doubled as a fertile training ground for an emerging generation of programmers that included C programming language creator Dennis Ritchie, Unix developer Ken Thompson, and spreadsheet inventors Dan Bricklin and Bob Frankston.</p> <p>Before time-sharing, using a computer was tedious and required detailed knowledge. Users would create programs on cards and submit them in batches to an operator, who would enter them to be run one at a time over a series of hours.
Minor errors would require repeating this sequence, often more than once.</p> <p>But with CTSS, which was first demonstrated in 1961, answers came back in mere seconds, forever changing the model of program development. Decades before the PC revolution, Corbató and his colleagues also opened up communication between users with early versions of email, instant messaging, and word processing.&nbsp;</p> <p>“Corby was one of the most important researchers for making computing available to many people for many purposes,” says long-time colleague Tom Van Vleck. “He saw that these concepts don’t just make things more efficient; they fundamentally change the way people use information.”</p> <p>Besides making computing more efficient, CTSS also inadvertently helped establish the very concept of digital privacy itself. With different users wanting to keep their own files private, CTSS introduced the idea of having people create individual accounts with personal passwords. Corbató’s vision of making high-performance computers available to more people also foreshadowed trends in cloud computing, in which tech giants like Amazon and Microsoft rent out shared servers to companies around the world.&nbsp;</p> <p>“Other people had proposed the idea of time-sharing before,” says Jerry Saltzer, who worked on CTSS with Corbató after starting out as his teaching assistant. “But what he brought to the table was the vision and the persistence to get it done.”</p> <p>CTSS was also the spark that convinced MIT to launch “Project MAC,” the precursor to the Laboratory for Computer Science (LCS). 
LCS later merged with the Artificial Intelligence Lab to become MIT’s largest research lab, the <a href="" target="_blank">Computer Science and Artificial Intelligence Laboratory</a> (CSAIL), which is now home to more than 600 researchers.&nbsp;</p> <p>“It’s no overstatement to say that Corby’s work on time-sharing fundamentally transformed computers as we know them today,” says CSAIL Director Daniela Rus. “From PCs to smartphones, the digital revolution can directly trace its roots back to the work that he led at MIT nearly 60 years ago.”&nbsp;</p> <p>In 1990 Corbató was honored for his work with the Association for Computing Machinery’s <a href="" target="_blank">Turing Award</a>, often described as “the Nobel Prize for computing.”</p> <p><strong>From sonar to CTSS</strong></p> <p>Corbató was born on July 1, 1926, in Oakland, California. At 17 he enlisted as a technician in the U.S. Navy, where he first got the engineering bug working on a range of radar and sonar systems. After World War II he earned his bachelor's degree at Caltech before heading to MIT to complete a PhD in physics.&nbsp;</p> <p>As a PhD student, Corbató met Professor Philip Morse, who recruited him to work with his team on Project Whirlwind, the first computer capable of real-time computation. After graduating, Corbató joined MIT's Computation Center as a research assistant, soon moving up to become deputy director of the entire center.&nbsp;</p> <p>It was there that he started thinking about ways to make computing more efficient. For all its innovation, Whirlwind was still a rather clunky machine. Researchers often had trouble getting much work done on it, since they had to take turns using it for half-hour chunks of time.
(Corbató said that it had a habit of crashing every 20 minutes or so.)&nbsp;</p> <p>Since computer input and output devices were much slower than the computer itself, in the late 1950s a scheme called multiprogramming was developed to allow a second program to run whenever the first program was waiting for some device to finish. Time-sharing built on this idea, allowing other programs to run while the first program was waiting for a human user to type a request, thus allowing the user to interact directly with the first program.</p> <p>Saltzer says that Corbató pioneered a programming approach that would be described today as agile design.&nbsp;</p> <p>“It’s a buzzword now, but back then it was just this iterative approach to coding that Corby encouraged and that seemed to work especially well,” he says.&nbsp;&nbsp;</p> <p>In 1962 Corbató published a paper about CTSS that quickly became the talk of the slowly growing computer science community. The following year MIT invited several hundred programmers to campus to try out the system, spurring a flurry of further research on time-sharing.</p> <p>Foreshadowing future technological innovation, Corbató was amazed — and amused — by how quickly people got habituated to CTSS’ efficiency.</p> <p>“Once a user gets accustomed to [immediate] computer response, delays of even a fraction of a minute are exasperatingly long,” he presciently wrote in his 1962 paper. “First indications are that programmers would readily use such a system if it were generally available.”</p> <p>Multics, meanwhile, expanded on CTSS’ more ad hoc design with a hierarchical file system, better interfaces to email and instant messaging, and more precise privacy controls.
Peter Neumann, who worked at Bell Labs when they were collaborating with MIT on Multics, says that its design prevented the possibility of many vulnerabilities that impact modern systems, like “buffer overflow” (which happens when a program tries to write data outside the computer’s short-term memory).&nbsp;</p> <p>“Multics was so far ahead of the rest of the industry,” says Neumann. “It was intensely software-engineered, years before software engineering was even viewed as a discipline.”&nbsp;</p> <p>In spearheading these time-sharing efforts, Corbató served as a soft-spoken but driven commander in chief — a logical thinker who led by example and had a distinctly systems-oriented view of the world.</p> <p>“One thing I liked about working for Corby was that I knew he could do my job if he wanted to,” says Van Vleck. “His understanding of all the gory details of our work inspired intense devotion to Multics, all while still being a true gentleman to everyone on the team.”&nbsp;</p> <p>Another legacy of the professor’s is “Corbató’s Law,” which states that the number of lines of code someone can write in a day is the same regardless of the language used. This maxim is often cited by programmers when arguing in favor of using higher-level languages.</p> <p>Corbató was an active member of the MIT community, serving as associate department head for computer science and engineering from 1974 to 1978 and 1983 to 1993. 
He was a member of the National Academy of Engineering, and a fellow of the Institute of Electrical and Electronics Engineers and the American Association for the Advancement of Science.&nbsp;</p> <p>Corbató is survived by his wife, Emily Corbató, from Brooklyn, New York; his stepsons, David and Jason Gish; his brother, Charles; and his daughters, Carolyn and Nancy, from his marriage to his late wife Isabel; and five grandchildren.&nbsp;</p> <p>In lieu of flowers, gifts may be made to MIT’s <a href="" target="_blank">Fernando Corbató Fellowship Fund</a> via <a href="">Bonny Kellermann</a> in the Memorial Gifts Office.&nbsp;</p> <p>CSAIL will host an event to honor and celebrate Corbató in the coming months.&nbsp;</p> Fernando "Corby" Corbató led the development of early “time-sharing” operating systems, including Multics and the Compatible Time-Sharing System (CTSS).Image courtesy of the Corbató family.Faculty, Obituaries, Artificial intelligence, Cyber security, Computer science and technology, History of science, Computer Science and Artificial Intelligence Laboratory (CSAIL), Electrical Engineering & Computer Science (eecs), School of Engineering 3Q: David Mindell on his vision for human-centered robotics Engineer and historian discusses how the MIT Schwarzman College of Computing might integrate technical and humanistic research and education. Tue, 18 Jun 2019 14:35:01 -0400 School of Humanities, Arts, and Social Sciences <p><em>David Mindell, Frances and David Dibner Professor of the History of Engineering and Manufacturing in the School of Humanities, Arts, and Social Sciences and professor of aeronautics and astronautics, researches the intersections of human behavior, technological innovation, and automation. Mindell is the author of five acclaimed books, most recently "Our Robots, Ourselves: Robotics and the Myths of Autonomy" (Viking, 2015) as well as the co-founder of the Humatics Corporation, which develops technologies for human-centered automation. 
SHASS Communications spoke with Mindell recently on how his vision for human-centered robotics is developing and his thoughts about the new MIT Stephen A. Schwarzman College of Computing, which aims to integrate technical and humanistic research and education.&nbsp;&nbsp;</em><br /> &nbsp;<br /> <strong>Q:</strong> Interdisciplinary programs have proved challenging to sustain, given the differing methodologies and vocabularies of the fields being brought together. How might the MIT Schwarzman College of Computing design the curriculum to educate "bilinguals" — students who are adept in both advanced computation and one or more of the humanities, arts, and social science fields?<br /> &nbsp;<br /> <strong>A:</strong> Some technology leaders today are naive and uneducated in humanistic and social thinking. They still think that technology evolves on its own and “impacts” society, instead of understanding technology as a human and cultural expression, as part of society.<br /> <br /> As a historian and an engineer, and MIT’s only faculty member with a dual appointment in engineering and the humanities, I’ve been “bilingual” my entire career (long before we began using that term for fluency in both humanities and technology fields). My education started with firm grounding in two fields — electrical engineering and history — that I continue to study.<br /> <br /> Dual competence is a good model for undergraduates at MIT today as well. Pick two: not necessarily the two that I chose, but any two disciplines that capture the core of technology and the core of the humanities. Disciplines at the undergraduate level provide structure, conventions, and professional identity (although my appointment is in Aero/Astro, I still identify as an electrical engineer). 
I prefer the term “dual disciplinary” to “interdisciplinary.”&nbsp;<br /> <br /> The College of Computing curriculum should focus on fundamentals, not just engineering plus some dabbling in social implications.<br /> <br /> It sends the wrong message to students that “the technical stuff is core, and then we need to add all this wrapper humanities and social sciences around the engineering.” Rather, we need to say: “master two fundamental ways of thinking about the world, one technical and one humanistic or social.” Sometimes these two modes will be at odds with each other, which raises critical questions. Other times they will be synergistic and energizing. For example, my historical work on the Apollo guidance computer inspired a great deal of my current engineering work on precision navigation.<br /> <br /> <strong>Q:</strong> In naming the company you founded Humatics, you’ve combined “human” and “robotics,” highlighting the synergy between human beings and our advanced technologies. What projects underway at Humatics define and demonstrate how you envision people working collaboratively with machines?&nbsp;<br /> <br /> <strong>A:</strong> Humatics builds on the synthesis that has defined my career — the name is the first four letters of “human" and the last four letters of “robotics.” Our mission is to build technologies that weave robotics into the human world, rather than shape human behavior to the limitations of the robots. We do very technical stuff: We build our own radar chips, our own signal processing algorithms, our own AI-based navigation systems. But we also craft our technologies to be human-centered, to give users and workers information that enables them to make their own decisions and work safer and more efficiently.<br /> <br /> We’re currently working to incorporate our ultra-wideband navigation systems into subway and mass transit systems. 
Humatics' technologies will enable modern signaling systems to be installed more quickly and less expensively. It's gritty, dirty work down in the tunnels, but it is a “smart city” application that can improve the daily lives of millions of people. By enabling the trains to navigate themselves with centimeter-precision, we enable greater rush-hour throughput, fewer interruptions, even improved access for people with disabilities, at a minimal cost compared to laying new track.<br /> <br /> A great deal of this work focuses on reliability, robustness, and safety. These are large technological systems that MIT used to focus on in the Engineering Systems Division. They are legacy infrastructure running at full capacity, with a variety of stakeholders, and technical issues hashed out in political debate. As an opportunity to improve people's lives with our technology, this project is very motivating for the Humatics team.<br /> <br /> We see a subway system as a giant robot that collaborates with millions of people every day. Indeed, for all its flaws, it does so today in beautifully fluid ways.&nbsp;Disruption is not an option. Similarly, we see factories, e-commerce fulfillment centers, even entire supply chains as giant human-machine systems that combine three key elements: people, robots (vehicles), and infrastructure. Humatics builds the technological glue that ties these systems together.<br /> <br /> <strong>Q:</strong> Autonomous cars were touted to be available soon, but their design has run into issues and ethical questions. 
Is there a different approach to the design of artificially intelligent vehicles, one that does not attempt to create&nbsp;fully autonomous vehicles?&nbsp;If so, what are the barriers or resistance to human-centered approaches?<br /> <br /> <strong>A:</strong> Too many engineers still imagine autonomy as meaning “alone in the world.” This approach derives from a specific historical imagination of autonomy, rooted in Defense Advanced Research Projects Agency sponsorship and elsewhere, that a robot should be independent of all infrastructure. While that’s potentially appropriate for military operations, the promise of autonomy on our roads must be the promise of autonomy in the human world, in myriad&nbsp;exquisite relationships.<br /> <br /> Autonomous vehicle companies are learning, at great expense, that they already depend heavily on infrastructure (including roads and traffic signs) and that the sooner they learn to embrace it, the sooner they can deploy at scale. Decades of experience have taught us that, to function in the human world, autonomy must be connected, relational, and situated. Human-centered autonomy in automobiles must be more than a fancy FitBit on a driver; it must factor into the fundamental design of the systems: What do we wish to control? Whom do we trust? Who owns our data? How are our systems trained? How do they handle failure? Who gets to decide?<br /> <br /> The current crisis over the Boeing 737 MAX control systems shows these questions are hard to get right, even in aviation. There we have a great deal of regulation, formalism, training, and procedure, not to mention a safety culture that evolved over a century. For autonomous cars, with radically different regulatory settings and operating environments, not to mention non-deterministic software, we still have a great deal to learn. 
Sometimes I think it could take the better part of this century to really learn how to build robust autonomy into safety-critical systems at scale.<br /> &nbsp;</p> <h5><em>Interview prepared by MIT SHASS Communications<br /> Editorial and Design Director: Emily Hiestand<br /> Interview conducted by writer Maria Iacobo</em><br /> &nbsp;</h5> "As an engineer and historian, I’ve been 'bilingual' my entire career,” says David Mindell, the Dibner Professor of the History of Engineering and Manufacturing; professor of aeronautics and astronautics; and co-founder and CEO of Humatics Corporation. “Dual competence is a good model for undergraduates at MIT as well."Photo: Len RosensteinSchool of Humanities Arts and Social Sciences, School of Engineering, Faculty, Technology and society, Program in STS, Human-computer interaction, History of science, MIT Schwarzman College of Computing, History, 3 Questions, Autonomous vehicles, Aeronautical and astronautical engineering, Defense Advanced Research Projects Agency (DARPA), Industry A scholar and teacher re-examines moments in the history of STEM “I love teaching,” says PhD student Clare Kim. “It’s not that I’m just imparting knowledge, but I want [my students] to develop a critical way of thinking.” Thu, 13 Jun 2019 23:59:59 -0400 Daysia Tolentino | MIT News correspondent <p>When Clare Kim began her fall 2017 semester as the teaching assistant for 21H.S01, the inaugural “MIT and Slavery” course, she didn’t know she and her students would be creating a historical moment of their own at the Institute.</p> <p>Along with Craig Steven Wilder, the Barton L. Weller Professor of History, and Nora Murphy, an archivist for researcher services in the MIT Libraries, Kim helped a team of students use archival materials to examine the Institute’s ties to slavery and how that legacy has impacted the modern structure of scientific institutions. 
The findings that came to light through the class thrust Kim and her students onto a prominent stage. They spoke about their research in media interviews and at a standing-room-only <a href="">community forum</a>, and helped bring MIT into a national conversation about universities and the institution of slavery in the United States.</p> <p>For Kim, a PhD student in MIT’s Program in History, Anthropology, and Science, Technology, and Society (HASTS), it was especially rewarding to help the students to think critically about their own scientific work through a historical context. She enjoyed seeing how the course challenged conventional ideas that had been presented to them about their various fields of study.</p> <p>“I think people tend to think too much about history as a series of true facts where the narrative that gets constructed is stabilized. Conducting historical research is fun because you have a chance to re-examine evidence, examine archival materials, reinterpret some of what has already been written, and craft a new narrative as a result,” Kim says.</p> <p>This year, Kim was awarded the prestigious Goodwin Medal for her work as a TA for several MIT courses. The award recognizes graduate teaching assistants that have gone the extra mile in the classroom. Faculty, colleagues, and former students praised Kim for her compassionate, supportive, and individual approach to teaching.</p> <p>“I love teaching,” she says. “I like to have conversations with my students about what I’m thinking about. It’s not that I’m just imparting knowledge, but I want them to develop a critical way of thinking. I want them to be able to challenge whatever analyses I introduce to them.”</p> <p>Kim also applies this critical-thinking lens to her own scholarship in the history of mathematics. 
She is particularly interested in studying math this way because the field is often perceived as “all-stable” and contained, when in fact its boundaries have been much more fluid.</p> <p><strong>Mathematics and creativity</strong></p> <p>Kim’s own work re-examines the history of mathematical thought and how it has impacted nonscientific and technical fields in U.S. intellectual life. Her dissertation focuses on the history of mathematics and the ways that mathematicians interacted with artists, humanists, and philosophers throughout the 20th century. She looks at the dialogue and negotiations between different scholars, exploring how they reconfigured the boundaries between academic disciplines.</p> <p>Kim says that this moment in history is particularly interesting because it reframes mathematics as a field that hasn’t operated autonomously, but rather has engaged with humanistic and artistic practices. This creative perspective, she says, suggests an ongoing, historical relationship between mathematics and the arts and humanities that may come as a surprise to those more likely to associate mathematics with technical and military applications, at least in terms of practical uses.</p> <p>“Accepting this clean divide between mathematics and the arts occludes all of these fascinating interactions and conversations between mathematicians and nonmathematicians about what it meant to be modern and creative,” Kim says. One such moment of interaction she explores is between mathematicians and design theorists in the 1930s, who worked together in an attempt to develop and teach a mathematical theory of “aesthetic measure,” a way of ascribing judgments of beauty and taste. &nbsp;</p> <p><strong>Building the foundation</strong></p> <p>With an engineering professor father and a mathematician mother, Kim has long been interested in science and mathematics. 
However, she says influences from her family, which includes a twin sister who is a classicist and an older sister who studied structural biology, ensured that she would also develop a strong background in the humanities and literature.</p> <p>Kim entered college thinking that she would pursue a technical field, though likely not math itself — she jokes that her math career peaked during her time competing in MATHCOUNTS as a kid. But during her undergraduate years at Brown University, she took a course on the history of science taught by Joan Richards, a professor specializing in the history of mathematics. There, she discovered her interest in studying not just scientific knowledge, but the people who pursue it.</p> <p>After earning a bachelor’s in history at Brown, with a focus in mathematics and science, Kim decided to pursue a doctoral degree. MIT’s HASTS program appealed to her because of its interdisciplinary approach to studying the social and political components of science and technology.</p> <p>“In addition to receiving more formal training in the history of science itself, HASTS trained me in anthropological inquiry, political theory, and all these different kinds of methods that could be brought to bear on the social sciences and humanities more generally,” Kim says.</p> <p>After defending her thesis, Kim will begin a postdoc at Washington University in St. Louis, where she will continue her research and begin converting her dissertation into a book manuscript. 
She will also be teaching a course she has developed called “Code and Craft,” which explores, in a variety of historical contexts, the artful and artisanal components of AI, computing, and otherwise “technical” domains.</p> <p>In her free time, Kim practices taekwondo (she has a first-degree black belt) and enjoys taking long walks through Cambridge, which she says is how she gets some of her best thinking done.</p> PhD student Clare Kim studies the history of mathematics and the ways that mathematicians interacted with artists, humanists, and philosophers throughout the 20th century.Image: Jared CharneyGraduate, postdoctoral, Profile, History, Mathematics, History of science, School of Humanities Arts and Social Sciences, Diversity and inclusion, Technology and society, Race and gender, Program in STS, Students Dwaipayan Banerjee receives 2019 Levitan Prize in the Humanities Award will support research for &quot;A Counter History of Computing in India.&quot; Mon, 10 Jun 2019 10:20:01 -0400 School of Humanities, Arts, and Social Sciences <p>Assistant Professor Dwaipayan Banerjee of the Program in Science, Technology, and Society (STS) has been awarded the 2019 James A. (1945) and Ruth Levitan Prize in the Humanities. 
The prestigious award comes with&nbsp;a $29,500 grant that will support Banerjee's research on the history of computing in India.<br /> <br /> Melissa Nobles, the Kenan Sahin Dean of MIT’s School of Humanities, Arts, and Social Sciences (SHASS), announced the award, noting that a committee of senior faculty had reviewed submissions for the Levitan Prize and selected Banerjee’s proposal as the most outstanding.<br /> <br /> “Dwai’s work is extremely relevant today, and I look forward to seeing how his new project expands our understanding of technology and technological culture as a part of the human world,” Nobles says.</p> <p><strong>Postcolonial India and computing</strong></p> <p>Banerjee’s scholarship centers on the social contexts of science, technology, and medicine in the global south. He has two book projects now nearing completion: "Enduring Cancer: Health and Everyday Life in Contemporary India" (forthcoming in 2020, Duke University Press) and "Hematologies: The Political Life of Blood in India" (forthcoming in 2019, Cornell University Press; co-authored with J. Copeman). Both books assess how India’s post-colonial history has shaped, and been shaped by, practices of biomedicine and health care.<br /> <br /> Banerjee says he was delighted to receive the Levitan Award, which is presented annually by SHASS to support innovative and creative scholarship in one of the Institute’s humanities, arts, or social science fields. “Its funds will go a long way in helping explore archives about computational research and technology spread across India, some of which have yet to receive sustained scholarly attention,” he says.</p> <p><strong>Global computing histories</strong><br /> <br /> Banerjee's Levitan project will investigate the post-colonial history of computing in India from the 1950s to today. 
“Contemporary scholarly and popular narratives about computing in India suggest that, even as India supplies cheap IT labor to the rest of the world, the country lags behind in basic computing research and development,” he says. “My new project challenges these representations.”<br /> <br /> Banerjee adds, “In presenting this account, I urge social science research, which has predominantly focused on the history of computing in Europe and the United States, to take account of more global histories of computing.”<br /> <br /> The project, titled "A Counter History of Computing in India," will trace major shifts in the relation between the Indian state and computing research and practice. Banerjee explains that “In the first decades after India’s independence, the postcolonial state sought to develop indigenous computing expertise and infrastructure by creating public institutions of research and education, simultaneously limiting private enterprise and the entry of global capital.”</p> <p>Noting that today the vision for development relies heavily on private entrepreneurship, Banerjee asks: “Why and how did the early post-colonial vision of publicly-driven computing research and development decline?”<br /> <br /> <strong>Policy, computing, and outsourcing</strong><br /> <br /> More broadly, Banerjee plans to investigate how changing policies have impacted the development of computing and shaped the global distribution of expertise and labor. 
“After economic liberalization in the 1980s, a transformed Indian state gave up its protectionist outlook and began to court global corporations, giving rise to the new paradigm of outsourcing."<br /> <br /> Banerjee says he will endeavor to answer the question, “What is lost when a handful of U.S.-based corporations seek to determine hierarchies of technology work and control how its social benefits are globally distributed?” The Levitan Prize will support Banerjee's field research in India and help him develop a multi-city archive of primary sources relating to the history of computational science and technology in the region.<br /> <br /> First awarded in 1990, the Levitan Prize in the Humanities was established through a gift from the late James A. Levitan, a 1945 MIT graduate in chemistry who was also a member of the MIT Corporation.<br /> &nbsp;</p> <h5><em>Story prepared by MIT SHASS Communications<br /> Editorial and Design Director: Emily Hiestand<br /> Writer: Kathryn O'Neill</em></h5> Dwaipayan Banerjee says he will endeavor to answer the question: What is lost when a handful of U.S.-based corporations seek to determine hierarchies of technology work and control how its social benefits are globally distributed? Photo: Jon Sachs/MIT SHASS Communications School of Humanities Arts and Social Sciences, Program in STS, Awards, honors and fellowships, India, History, Computing, Faculty, History of science, Economics CSAIL hosts first-ever TEDxMIT Speakers — all women — discuss everything from gravitational waves to robot nurses. 
Fri, 31 May 2019 15:00:01 -0400 Adam Conner-Simons | Rachel Gordon | MIT CSAIL <p>On Tuesday, May 28, MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) hosted a special TEDx event featuring an all-female line-up of MIT scientists and researchers who discussed cutting-edge ideas in science and technology.</p> <p>TEDxMIT speakers included roboticists, engineers, astronomers, and policy experts, including former White House chief technology officer Megan Smith ’86 SM ’88 and MIT Institute Professor Emerita Barbara Liskov, <a href="" target="_blank">winner of the A.M. Turing Award</a>, often considered the “Nobel Prize for computing.”</p> <p>From Professor Nergis Mavalvala’s <a href="">work on gravitational waves</a> to Associate Provost Krystyn Van Vliet’s <a href="">efforts to improve cell therapy</a>, the afternoon was filled with energizing and historic success stories of women in STEM.</p> <p>In an early talk, MIT Associate Professor Julie Shah touched on the much-discussed narrative of artificial intelligence and job displacement, and how that relates to her own work creating systems that she described as “being intentional about augmenting human capabilities.” She spoke about her efforts developing robots to help reduce the cognitive burden of overwhelmed workers, like the nurses on labor wards who have to make <a href="" target="_self">hundreds of split-second decisions</a> for scheduling deliveries and C-sections.</p> <p>“We can create a future where we don’t have robots who replace humans, but that help us accomplish what neither group can do alone,” said Shah.</p> <p>CSAIL Director Daniela Rus, a professor of electrical engineering and computer science, spoke of how computer scientists can inspire the next generation of programmers by emphasizing the many possibilities that coding opens up.</p> <p>“I like to say that those of us who know how to ... 
breathe life into things through programming have superpowers,” said Rus.</p> <p>Throughout the day scientists showed off technologies that could fundamentally transform many industries, from Professor Dava Newman’s <a href="" target="_self">prototype Mars spacesuit</a> to Associate Professor Vivienne Sze’s <a href="" target="_self">low-power processors for machine learning</a>.</p> <p>Judy Brewer, director of the World Wide Web Consortium’s <a href="" target="_blank">Web Accessibility Initiative</a>, discussed the ways in which the web has made the world a more connected place for those with disabilities — and yet, how important it is for the people who design digital technologies to be better about making them accessible.</p> <p>“When the web became available, I could go and travel anywhere,” Brewer said. “There’s a general history of excluding people with disabilities, and then we go and design tech that perpetuates that exclusion. In my vision of the future everything is accessible, including the digital world.”</p> <p>Liskov captivated the audience with her tales of the early days of computer programming. She was asked to learn Fortran on her first day of work in 1961 — having never written a line of code before.</p> <p>“I didn’t have any training,” she said. “But then again, nobody did.”</p> <p>In 1971 Liskov joined MIT, where she created the programming language CLU, which established the notion of “abstract data types” and laid the groundwork for languages like Java and C#. Many coders now take so-called “object-oriented programming” (OOP) for granted: She wryly reflected on how, after she won the Turing Award, one internet commenter looked at her contributions to data abstraction and pointed out that “everybody knows that, anyway.”</p> <p>“It was a statement to how much the world has changed,” she said with a smile. 
“When I was doing that work decades earlier, nobody knew anything about [OOP].”</p> <p>Other researchers built off of Liskov’s remarks in discussing the birth of big data and machine learning. Professor Ronitt Rubinfeld spoke about how computer scientists’ work in sublinear time algorithms has allowed them to better make sense of large amounts of data, while Hamsa Balakrishnan spoke about the ways in which algorithms can help systems engineers make air travel more efficient.</p> <p>The event’s overarching theme was highlighting examples of female role models in fields where they’ve often been overlooked. Paula Hammond, head of MIT’s Department of Chemical Engineering, touted the fact that more than half of undergraduates in her department this year were women. Rus urged the women in the audience, many of whom were MIT students, to think about what role they might want to play in continuing to advance science in the coming years.</p> <p>“To paraphrase our hometown hero, President John F. Kennedy, we need to prepare [women] to see both what technology can do for them — and what they can do for technology,” Rus said.</p> <p>Rus led the planning of the TEDxMIT event alongside MIT research affiliate John Werner and student directors Stephanie Fu and Rucha Kelkar, both first-years.</p> MIT Professor Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory, kicked off an all-female lineup of speakers at TEDxMIT.Photo: Jason Dorfman/MIT CSAILSpecial events and guest speakers, Technology and society, Women in STEM, Faculty, Alumni/ae, Diversity and inclusion, Computer science and technology, History of science, Chemical engineering, Aeronautical and astronautical engineering, Electrical Engineering & Computer Science (eecs), Physics, DMSE, School of Engineering, School of Science, Assistive technology A new era in 3-D printing Mechanical engineering researchers are inventing game-changing technologies and developing a renaissance in 3-D printing. 
Thu, 16 May 2019 10:35:01 -0400 Mary Beth O'Leary | Department of Mechanical Engineering <p>In the mid-15th century, a new technology that would change the course of history was invented. Johannes Gutenberg’s printing press, with its movable type, promoted the dissemination of information and ideas that is widely recognized as a major contributing factor for the Renaissance.</p> <p>Over 500 years later, a new type of printing was invented in the labs of MIT. Emanuel Sachs, professor of mechanical engineering, invented a process known as binder jet printing. In binder jet printing, an inkjet printhead selectively drops a liquid binder material into a powder bed — creating a three-dimensional object layer by layer.</p> <p>Sachs coined a new name for this process: 3-D printing. “My father was a publisher and my mother was an editor,” explains Sachs. “Growing up, my father would take me to the printing presses where his books were made, which influenced my decision to name the process 3-D printing.”</p> <p>Sachs’ binder jet printing process was one of several technologies developed in the 1980s and '90s in the field now known as additive manufacturing, a term that has come to describe a wide variety of layer-based production technologies. Over the past three decades, there has been an explosion in additive manufacturing research. These technologies have the potential to transform the way countless products are designed and manufactured.<br /> <br /> One of the most immediate applications of 3-D printing has been the rapid prototyping of products. “It takes a long time to prototype using traditional manufacturing methods,” explains Sachs. 3-D printing has transformed this process, enabling rapid iteration and testing during the product development process.</p> <p>This flexibility has been a game-changer for designers. 
“You can now create dozens of designs in CAD, input them into a 3-D printer, and in a matter of hours you have all your prototypes,” adds Maria Yang, professor of mechanical engineering and director of MIT’s Ideation Laboratory. “It gives you a level of design exploration that simply wasn’t possible before.”</p> <p>Throughout MIT’s Department of Mechanical Engineering, many faculty members have been finding new ways to incorporate 3-D printing across a vast array of research areas. Whether it’s printing metal parts for airplanes, printing objects on a nanoscale, or advancing drug discovery by printing complex biomaterial scaffolds, these researchers are testing the limits of 3-D printing technologies in ways that could have lasting impact across industries.</p> <p><strong>Improving speed, cost, and accuracy </strong></p> <p>There are several technological hurdles that have prevented additive manufacturing from having an impact on the level of Gutenberg’s printing press. A. John Hart, associate professor of mechanical engineering and director of MIT’s Laboratory for Manufacturing and Productivity, focuses much of his research on addressing those issues.</p> <p>“One of the most important barriers to making 3-D printing accessible to designers, engineers, and manufacturers across the product life cycle is the speed, cost, and quality of each process,” explains Hart.</p> <p>His research seeks to overcome these barriers, and to enable the next generation of 3-D printers that can be used in the factories of the future. For this to be accomplished, synergy among machine design, materials processing, and computation is required.</p> <p>To work toward achieving this synergy, Hart’s research group examined the processes involved in the most well-known style of 3-D printing: extrusion. 
In extrusion, plastic is melted and squeezed through a nozzle in a printhead.</p> <p>“We analyzed the process in terms of its fundamental limits — how the polymer could be heated and become molten, how much force is required to push the material through the nozzle, and the speed at which the printhead moves around,” adds Hart.</p> <p>With these new insights, Hart and his team designed a new printer that operated at speeds 10 times faster than existing printers. A gear that would have taken one to two hours to print could now be ready in five to 10 minutes. This drastic increase in speed is the result of a novel printhead design that Hart hopes will one day be commercialized for both desktop and industrial printers.</p> <p>While this new technology could improve our ability to print plastics quickly, printing metals requires a different approach. For metals, precise quality control is especially important for industrial use of 3-D printing. Metal 3-D printing has been used to create objects ranging from airplane fuel nozzles to hip implants, yet it is only just beginning to become mainstream. Items made using metal 3-D printing are particularly susceptible to cracks and flaws due to the large thermal gradients inherent in the process.</p> <p>To solve this problem, Hart is embedding quality control within the printers themselves. “We are building instrumentation and algorithms that monitor the printing process and detect if there are any mistakes — as small as a few micrometers — as the objects are being printed,” Hart explains.</p> <p>This monitoring is complemented by advanced simulations, including models that can predict how the powder used as the feedstock for printing is distributed and can also identify how to modify the printing process to account for variations.</p> <p>Hart’s group has been pioneering the use of new materials in 3-D printing. 
He has developed methods for printing with cellulose, the world’s most abundant polymer, as well as carbon nanotubes, nanomaterials that could be used in flexible electronics and low-cost radio frequency tags.</p> <p>When it comes to 3-D printing on a nanoscale, Hart’s colleague Nicholas Xuanlai Fang, professor of mechanical engineering, has been pushing the limits of how small these materials can be.</p> <p><strong>Printing nanomaterials using light</strong></p> <p>Inspired by the semiconductor and silicon chip industries, Fang has developed a 3-D printing technology that enables printing on a nanoscale. As a PhD student, Fang first got interested in 3-D printing while looking for a more efficient way to make the microsensors and micropumps used for drug delivery.</p> <p>“Before 3-D printing, you needed expensive facilities to make these microsensors,” explains Fang. “Back then, you’d send design layouts to a silicon manufacturer, then you’d wait four to six months before getting your chip back.” The process was so time-intensive it took one of his labmates four years to get eight small wafers.</p> <p>As advances in 3-D printing technologies made manufacturing processes for larger products cheaper and more efficient, Fang began to research how these technologies might be used on a much smaller scale.</p> <p>He turned to a 3-D printing process known as stereolithography. In stereolithography, light is sent through a lens and causes molecules to harden into three-dimensional polymers — a process known as photopolymerization.</p> <p>The size of objects that could be printed using stereolithography was limited by the wavelength of the light being sent through the optical lens — or the so-called diffraction limit — which is roughly 400 nanometers. Fang and his team were the first researchers to break this limit.</p> <p>“We essentially took the precision of optical technology and applied it to 3-D printing,” says Fang. 
The process, known as projection micro-stereolithography, transforms a beam of light into a series of wavy patterns. The wavy patterns are transferred through silver to produce fine lines as small as 40 nm, which is 10 times smaller than the diffraction limit and 100 times smaller than the width of a strand of hair.</p> <p>The ability to pattern features this small using 3-D printing holds countless applications. One use for the technology Fang has been researching is the creation of a small foam-like structure that could be used as a substrate for catalytic conversion in automotive engines. This structure could treat greenhouse gases on a molecular level in the moments after an engine starts.</p> <p>“When you first start your engine, it’s the most problematic for volatile organic compounds and toxic gases. If we were to heat up this catalytic converter quickly, we could treat those gases more effectively,” he explains.</p> <p>Fang has also created a new class of 3-D printed metamaterials using projection micro-stereolithography. These materials are composed of complex structures and geometries. Unlike most solid materials, the metamaterials don’t expand with heat and don’t shrink with cold.</p> <p>“These metamaterials could be used in circuit boards to prevent overheating or in camera lenses to ensure there is no shrinkage that could cause a lens in a drone or UAV to lose focus,” says Fang.</p> <p>More recently, Fang has partnered with Linda Griffith, School of Engineering Teaching Innovation Professor of Biological and Mechanical Engineering, to apply projection micro-stereolithography to the field of bioengineering.</p> <p><strong>Growing human tissue with the help of 3-D printing</strong></p> <p>Human cells aren’t programmed to grow in a two-dimensional petri dish. While cells taken from a human host might multiply, once they become thick enough they essentially starve to death without a constant supply of blood. 
This has proved particularly problematic in the field of tissue engineering, where doctors and researchers are interested in growing tissue in a dish to use in organ transplants.</p> <p>For the cells to grow in a healthy way and organize into tissue in vitro, they need to be placed on a structure or ‘scaffold.’ In the 1990s, Griffith, an expert in tissue engineering and regenerative medicine, turned to a nascent technology to create these scaffolds — 3-D printing.</p> <p>“I knew that to replicate complex human physiology in vitro, we needed to make microstructures within the scaffolds to carry nutrients to cells and mimic the mechanical stresses present in the actual organ,” explains Griffith.</p> <p>She co-invented a 3-D printing process to make scaffolds from the same biodegradable material used in sutures. Tiny complex networks of channels with a branching architecture were printed within the structure of these scaffolds. Blood could travel through the channels, allowing cells to grow and eventually start to form tissue.</p> <p>Over the past two decades, this process has been used across various fields of medicine, including bone regeneration and growing cartilage in the shape of a human ear. While Griffith and her collaborators originally set out to regenerate a liver, much of their research has focused on how the liver interacts with drugs.</p> <p>“Once we successfully grew liver tissue, the next step was tackling the challenge of getting useful predictive drug development information from it,” adds Griffith.</p> <p>To develop more complex scaffolds that provide better predictive information, Griffith collaborated with Fang on applying his nano-3-D printing technologies to tissue engineering. Together, they have built a custom projection micro-stereolithography machine that can print high-resolution scaffolds known as liver mesophysiological systems (LMS). 
Micro-stereolithography printing allows the scaffolds that make up LMS to have channels as small as 40 microns wide. These small channels enable perfusion of the bioartificial organ at an elevated flow rate, which allows oxygen to diffuse throughout the densely packed cell mass.</p> <p>“By printing these microstructures in more minute detail, we are getting closer to a system that gives us accurate information about drug development problems like liver inflammation and drug toxicity, in addition to useful data about single-cell cancer metastasis,” says Griffith.</p> <p>Given the liver’s central role in processing and metabolizing drugs, the ability to mimic its function in a lab has the potential to revolutionize the field of drug discovery.</p> <p>Griffith’s team is also applying their projection micro-stereolithography technique to create scaffolds for growing induced pluripotent stem cells into human-like brain tissue. “By growing these stem cells in the 3-D printed scaffolds, we are hoping to be able to create the next generation of more mature brain organoids in order to study complex diseases like Alzheimer's,” explains Pierre Sphabmixay, a mechanical engineering PhD candidate in Griffith’s lab.</p> <p><strong>Partnering with industry</strong></p> <p>For 3-D printing to make a lasting impact on how products are both designed and manufactured, researchers need to work closely with industry. To help bridge this gap, the MIT Center for Additive and Digital Advanced Production Technologies (APT) was launched in late 2018.</p> <p>“The idea was to intersect additive manufacturing research, industrial development, and education across disciplines all under the umbrella of MIT,” explains Hart, who founded and serves as director of APT. 
“We hope that APT will help accelerate the adoption of 3-D printing, and allow us to better focus our research toward true breakthroughs beyond what can be imagined today.”</p> <p>Since APT launched in November 2018, MIT and the twelve founding member companies — including ArcelorMittal, Autodesk, Bosch, Formlabs, General Motors, and the Volkswagen Group — have met both at a large tradeshow in Germany and on campus. Most recently, they convened at MIT for a workshop on scalable workforce training for additive manufacturing.</p> <p>“We’ve created a collaborative nexus for APT’s members to unite and solve common problems that are currently limiting the adoption of 3-D printing — and more broadly, new concepts in digitally-driven production — at a large scale,” adds Haden Quinlan, program manager of APT. Many also consider Boston the epicenter of 3-D printing innovation and entrepreneurship, thanks in part to several fast-growing local startups founded by MIT faculty and alumni.</p> <p>Efforts like APT, coupled with the groundbreaking work being done in the sphere of additive manufacturing at MIT, could reshape the relationship between research, design, and manufacturing for new products across industries.</p> <p>Designers could quickly prototype and iterate the design of products. Safer, more accurate metal hinges could be printed for use in airplanes or cars. Metamaterials could be printed to form electronic chips that don’t overheat. Entire organs could be grown from donor cells on 3-D printed scaffolds. While these technologies may not spark the next Renaissance as the printing press did, they offer solutions to some of the biggest problems society faces in the 21st century.</p> Seok Kim, a postdoc in Professor Nicholas Fang’s lab, holds up a 3-D-printed porous substrate that could be used as a catalytic reactor to remove toxic gases in cars and power plants. 
Photo: John Freidah
In cancer research, a winding road to discovery Book by MIT professor examines the circuitous history behind the investigation of cancer as a contagious illness. Tue, 14 May 2019 00:00:00 -0400 Peter Dizikes | MIT News Office <p>In 1961, people in the suburb of Niles, Illinois, experienced what they termed a “cancer epidemic.” Over a dozen children in the town were diagnosed with leukemia within a short time. Fears quickly spread that the illness could be contagious, carried by some type of “cancer virus.” News coverage soon identified several other towns with apparent “cancer clusters,” as well. Belief that cancer was a simple contagion, like polio or the flu, kept bubbling up.</p> <p>“People wrote [to medical authorities] well into the 1960s asking, ‘I lived in a house where somebody had cancer. Am I going to catch cancer?’” says Robin Scheffler, the Leo Marx Career Development Assistant Professor in the History and Culture of Science and Technology at MIT.</p> <p>Those fears were taken seriously. The National Cancer Institute (NCI) created the Special Virus Leukemia Program in 1964 and over the next 15 years spent more than $6.5 billion (in 2017 dollars) on cancer virus research intended to develop a vaccine. That’s more than the funding for the subsequent Human Genome Project, as Scheffler points out.</p> <p>The results of that funding were complex, unanticipated — and significant, as Scheffler details in his new book, “A Contagious Cause: The American Hunt for Cancer Viruses and the Rise of Molecular Medicine,” published this week by the University of Chicago Press.</p> <p>In the process, scientists did not find — and never have — a single viral cause of cancer. 
On the other hand, as a direct result of the NCI’s funding project, scientists did find oncogenes, the type of gene which, when activated, can cause many forms of cancer.</p> <p>“That investment helped drive the field of modern molecular biology,” Scheffler says. “It didn’t find the human cancer virus. But instead of closing down, it invented a new idea of how cancer is caused, which is the oncogene theory.”</p> <p>As research has continued, scientists today have identified hundreds of types of cancer, and about one out of every six cases has viral origins. While there is not one “cancer virus,” some vaccinations reduce susceptibility to certain kinds of cancer. In short, our understanding of cancer has become more sophisticated, specific, and effective — but the path of progress has had many twists and turns.&nbsp;</p> <p><strong>Less insurance, more research</strong></p> <p>As Scheffler details in his book, fears that cancer was a simple contagion can be traced back at least to the 18th century. They appear to have gained significant ground in the early 20th-century U.S., however, influencing medical research and even hospital design.</p> <p>The rise of massive funding for cancer research is mostly a post-World War II phenomenon; like much of Scheffler’s narrative, its story contains developments that would have been very hard to predict.</p> <p>For instance, as Scheffler chronicles, one of the key figures in the growth of cancer research was the midcentury health care activist Mary Lasker, who with her husband had founded the Lasker Foundation in 1942, and over time helped transform the American Cancer Society.</p> <p>During the presidency of Harry S. Truman, however, Lasker’s main goal was the creation of universal health insurance for Americans — an idea that seemed realistic for a time but was eventually shot down in Washington. That was a major setback for Lasker. 
In response, though, she became a powerful advocate for federal funding of medical research — especially through the National Institutes of Health (NIH), and the NCI, one of the NIH’s arms.</p> <p>Scheffler calls this tradeoff — less government health insurance, but more biomedical research — the “biomedical settlement,” and notes that it was unique to the U.S. at the time. By contrast, in grappling with cancer through the 1960s, Britain and France, for example, put more relative emphasis on treatment, and Germany looked more extensively at environmental issues. Since the 1970s, there has been more convergence in the approaches of many countries.</p> <p>“The term ‘biomedical settlement’ is a phrase I created to describe an idea that seems commonplace in the United States but is actually very extraordinary in the context of other industrial nations — which is, we will not federalize health care, but we will federalize health research,” Scheffler says. “It’s remarkable to keep the government out of one but invite it into the other.”</p> <p>And while observers of the U.S. scientific establishment today know the NIH as a singular research force, they probably don’t think of it as compensation, in a sense, for the failed policy aims of Lasker and her allies.</p> <p>“Someone like Mary Lasker is one of the architects of the settlement out of her conviction there were ways to involve the federal government even if they couldn’t provide medical care,” Scheffler adds.</p> <p><strong>Fighting through frustration</strong></p> <p>The core of “A Contagious Cause” chronicles critical research developments in the 1960s and 1970s, as biologists made headway in understanding many forms of cancer. 
But beyond its rich narrative about the search for a single cancer virus, “A Contagious Cause” also contains plenty of material that underscores the highly contingent, unpredictable nature of scientific discovery.</p> <p>From stymied scientists to angry activists, many key figures in the book seemed to have reached dead ends before making the advances we now recognize. Yes, science needs funding, new instrumentation, and rich theories to advance. But it can also be fueled by frustration.</p> <p>“The thing I find interesting is that there are a lot of moments of frustration,” Scheffler says. “Things don’t go the way people want, and they have to decide what they’re going to do next. I think often the history of science focuses on moments of discovery, or highlights great innovations and their successes. But talking about frustration and failure is also a very important topic to highlight in terms of how we understand the history of science.”</p> <p>“A Contagious Cause” has received praise from other scholars. Angela Creager, a historian of science at Princeton University, has called it “powerfully argued” and “vital reading for historians of science and political historians alike.”</p> <p>For his part, Scheffler says he hopes his book will both illuminate the history of cancer research in the U.S. and underscore the need for policymakers to apply a broad set of tools as they guide our ongoing efforts to combat cancer.</p> <p>“Cancer is a molecular disease, but it’s also an environmental disease and a social disease. 
We need to understand the problem at all those levels to come up with a policy that best confronts it,” Scheffler says.</p> Robin Scheffler and his new book, “A Contagious Cause: The American Hunt for Cancer Viruses and the Rise of Molecular Medicine.” Image: Jon Sachs/SHASS Communications
From science class to the stock exchange “I’m all about finding connections,” says senior Stephon Henry-Rerrie about his path from engineering to the financial sector. Tue, 30 Apr 2019 23:59:59 -0400 Gina Vitale | MIT News correspondent <p>Stephon Henry-Rerrie grew up in Brooklyn as the oldest of five siblings. He loved math puzzles from a young age and chose a premed track in his specialized high school. He never thought he’d study at MIT, but after being accepted to MIT’s Weekend Immersion in Science and Engineering (WISE), a program for high school seniors from underrepresented communities to learn about the MIT experience, he changed his mind.</p> <p>Before visiting MIT, “I could never see myself here, because it was just this ivory-tower looking place,” he says. “Whereas when I was here, and I was talking with people, I was like, ‘Oh, wow I can hang.’ Maybe I do belong here.”</p> <p>Henry-Rerrie, now a senior, has discovered many passions during his time at the Institute. He realized early on that he didn’t want to pursue medicine, and chose to major in chemical engineering. Then, after realizing how versatile physics could be, he picked that up as a second major. In four years, he has helped create particle simulations, worked on a trading floor, conducted research in the chemical engineering industry, and mentored younger MIT students. He would never have predicted ending up where he is now — but he wouldn’t trade it.</p> <p>“I have a very weird, nonlinear trajectory that I’ve taken,” he says. 
“But along the way I’ve learned lots of things about myself and about the world.”</p> <p><strong>In the market for growth</strong></p> <p>When Henry-Rerrie accepted an internship at Morgan Stanley the summer after his first year, he had no idea that he’d be working on the trading floor. Some similarities to the movie “Wall Street” were uncanny, he says — he was surrounded by bond traders, and his mentor underwrote municipal bonds. He says the experience of working in finance fundamentally changed his life. Not only did he learn to speak up among many powerful voices, he also realized that science and engineering are directly tied into economics. Research doesn’t happen in a vacuum — when scientists make discoveries, that impacts the economy.</p> <p>“I think I needed that exposure,” he says. “Because if I hadn’t, I feel like I wouldn’t have the perspective that I have now on, what does this all mean? What is going on? What’s this larger system that we exist in?”</p> <p>He really enjoyed working within the financial sector. And, after meeting a number of former physicists (and chemical engineers) now working in financial roles at Morgan Stanley, he realized that studying physics rather than economics wouldn’t hurt his chances of getting a job in finance — so he took on a double major and was thrilled to study another area he’s always been fascinated by.</p> <p>In his sophomore year, he worked in the lab of Assistant Professor James Swan, creating particle simulations with PhD student Zachary Sherman. The pair looked at how varying two different kinds of interactions between nanoparticles in solution affected those nanoparticles. Henry-Rerrie likens it to having a bunch of people (representing the particles) in a room where temperature and wind are controlled by two knobs. As you turn up the temperature knob, or the wind knob, or both knobs in varying amounts, the people will react.</p> <p>“What will those people be feeling? What will they do? ... 
I can turn those knobs and record, what did those people do at each specific value? And then after that, can we see a trend in how people will react?”</p> <p>The following summer, Henry-Rerrie took an internship at chemical engineering company Praxair. The people there were great, he says, but as he considered his options for the future, he found his heart was with financial markets. The following summer, he took a job at investment management company BlackRock.</p> <p>“I also found that finance touches everything, everybody’s life, in a very real way that you can’t get away from, at least now,” he says.</p> <p>For him, BlackRock was the perfect compromise between chemical engineering and finance. As much of his role involved risk and quantitative analysis, he was able to practice many of the techniques he learned in engineering, as well as do real work in the finance sector.</p> <p>“At my internship at BlackRock, I was able to apply everything that I learned,” he says. “Not necessarily the technical stuff, but the way of problem solving, of thinking.”</p> <p><strong>Chocolate City</strong></p> <p>When Henry-Rerrie was first visiting MIT, he was introduced to a living group called Chocolate City, in New House. The group consisted of black and Latino men supporting each other socially, academically, and professionally.</p> <p>“When I saw that, that was the signal to me that MIT is just a special place,” he says.</p> <p>He was accepted to live in Chocolate City his first year and has been there ever since. He has served in a variety of roles, including athletics chair, social chair, co-chair, and now resident peer mentor. He describes himself as the big brother of the house, working to get people to socialize and bond with each other. Living in the group has had its challenges, as its members come from diverse backgrounds and often have conflicting opinions. 
But that’s all part of the learning experience that makes it so valuable, he says.</p> <p>“Being in that ecosystem has, I think, developed me into the person I am now, and helped me to feel like I can take on, I can take on anything after I graduate here.”</p> <p>Henry-Rerrie loves being part of Chocolate City, and is grateful for how much it has developed him as a person. That’s why he’s chosen to give back to the other residents this year as the resident peer mentor, and why he plans to continue to help out as an alumnus. To him, Chocolate City is much more than a place to sleep and study.</p> <p>“I feel like I’m home,” he says of being a part of the living group. “I don’t feel like I’m at a dorm; I feel like I’m home.”</p> <p><strong>Science in context</strong></p> <p>Henry-Rerrie is grateful for the context that his humanities, arts, and social sciences (HASS) classes have given him in his scientific pursuits. He recalls one class, STS.042 / 8.225 (Physics in the Twentieth Century), that introduced him to an entire world of physics history. He learned everything from the politics underlying physics to the fact that Erwin Schrödinger himself was skeptical of quantum theory — he only made the cat analogy to show how crazy it was.</p> <p>“A lot of ways that we evaluate people and what they’ve done can be super muddled if we don’t understand the history of how things came about,” he says.</p> <p>It’s that kind of learning, bridging concepts that he never assumed were related, that Henry-Rerrie really enjoys. The applications to engineering and broader society are what drew him to finance; his research and economic work at BlackRock was so fulfilling that he’s accepted an offer to return after graduation full-time.</p> <p>Longer term, Henry-Rerrie isn’t sure where exactly he’ll end up. He’s considering business school in his five-year plan and would love to end up back at MIT for that. 
His broader goal, at least right now, is to figure out where his skills can be put to the greatest use.</p> <p>“I’m all about finding connections. Between, I guess, very weird things. Things that don’t seem that related,” he says.</p> Stephon Henry-Rerrie Image: Jake Belcher
The quest to understand human society scientifically In STS.047 (Quantifying People), MIT students explore the history of science from the 17th century to the present, through the eyes of statisticians and sociologists. Tue, 30 Apr 2019 13:05:01 -0400 School of Humanities, Arts, and Social Sciences <p>Is it appropriate to evaluate the causes of suicide but dismiss mental illness as a contributing factor? What happens when you talk about war deaths as colored wedges on a chart? Does that change the conversation in important ways?<br /> <br /> MIT students grappled with these and similar questions this spring in STS.047 (Quantifying People), a new subject focused on the history of the quest to understand human society scientifically. William Deringer, the Leo Marx Career Development Assistant Professor of Science, Technology, and Society, says he developed the class to enable students to explore the questions that motivate much of his own research: “Why do we invest so much trust in numbers, and what are the consequences for who we are?”<br /> <br /> Deringer has written a book on the subject, "Calculated Values: Finance, Politics, and the Quantitative Age" (Harvard University Press, 2018), in which he examines the history of human efforts to use statistics to influence opinions and shape policy. 
“Many MIT students will likely be practitioners in the data field, so I want to encourage them to think about these issues,” he says.<br /> <br /> The class has certainly gotten Jordan Browne thinking. “There’s this idea that by working with numbers people aren’t making moral judgments, but that’s a really dangerous assumption,” says Browne, a senior in the class who is majoring in mathematical economics. “This should be a required class.”<br /> <br /> In fact, STS.047 will be one of several courses featured in a new MIT undergraduate HASS concentration focused on Computational Cultures, which "brings together perspectives from the humanities and social sciences for students to understand and improve the social, cultural, and political impact of the computing tools and digital devices that shape our lives."</p> <p><strong>Are numbers neutral?</strong><br /> <br /> STS.047 covers the history of science from the 17th century to the present as seen through the eyes of early statisticians and sociologists — people who were building new fields by attempting to understand social life through quantification.<br /> <br /> One goal of the class, Deringer says, is to prompt students to consider the ways in which the tools we use to understand issues today can themselves reflect biases. “Thinking about old projects of quantification — the ways things look weird, wrong, or biased — helps you see how subjective elements might play out in current practice,” he says.<br /> <br /> In the late 1850s, for example, British nurse, social reformer, and statistician Florence Nightingale gathered mortality data from the Crimean War and created visualizations to show that wounded soldiers were dying from disease due to poor sanitation in military hospitals. Those deaths were represented as blue wedges on a diagram, prompting Nightingale to make this impassioned plea to save lives: “Expunge the blue wedges.”<br /> <br /> “That really struck me,” Deringer says. 
“There is some sort of strange transmutation that happens when you take data, turn it into something visual, then that is what you act on. That’s an interesting way of interacting with the world.”<br /> <br /> Students discussing the work during one class session this spring wondered if Nightingale had abstracted the problem to make it seem easier to solve, although some found it odd that she had effectively dehumanized those who had died.<br /> <br /> The students in class that day also discussed the work of 19th century French sociologist Emile Durkheim, who studied the correlation of suicide to such social circumstances as religion, marital status, and economic class. While Nightingale was using statistics in an attempt to change policy and save lives, Durkheim took an abstract approach that was less focused on solutions — and many students were unsettled by his dry assessment of the suicide data.<br /> <br /> “They’re not just statistics, they’re people too,” says Yiran He.</p> <p><strong>The complicated history of quantitative methods</strong><br /> <br /> A junior in the Department of Materials Science and Engineering, He says she signed up for STS.047 to gain insight into today’s data-driven society. “Numbers rule everything I see in the rest of my life: measurements and results in academia in science and engineering, statistics in politics and policy decisions, models in economic decisions, and everything between and beyond,” she says. “I felt it was important to understand the origins of the statistics we use.”<br /> <br /> For example, students in STS.047 learned that many tools in use today — including regression analysis — were developed through eugenics research. “These tools that every student here uses have this really insidious beginning,” Browne says.<br /> <br /> This supports a point Deringer makes right in the syllabus for STS.047. “Social science and quantitative methods have a complicated history. 
There is much to celebrate and also much to criticize.”<br /> <br /> This complex interplay of science and society is precisely what attracted Rhea Lin to the subject. “I wanted to take a humanities course that would give me the opportunity to reflect on how society has been impacted by science in the past and how my work as an engineer might affect people in the future,” says Lin, a senior majoring in electrical engineering and computer science.<br /> <br /> “From this class, I have learned that technology and science are not always the answer to our problems. We've studied social scientists who have thrown statistics and theories at society in questionable ways, and I think it's important to remember that science is not effective if not used correctly,” Lin says.</p> <h5><em>Story prepared by MIT SHASS Communications<br /> Editorial and Design Director: Emily Hiestand<br /> Senior Writer: Kathryn O'Neill</em></h5> Will Deringer examines the history of human efforts to use statistics to influence opinions and shape policy. “Many MIT students will likely be practitioners in the data field,” the assistant professor says, “so I want to encourage them to think about issues, such as: Why do we invest so much trust in numbers, and what are the consequences for who we are?” Photo: Jon Sachs/SHASS Communications
Jump-starting the economy with science In a new book, MIT professors say more public investment in science will create a better economy for all. Wed, 17 Apr 2019 09:42:08 -0400 Peter Dizikes | MIT News Office <p>In 1988, the U.S. federal government created a $3 billion, 15-year project to sequence the human genome. 
Not only did the project advance science, it hit the economic jackpot: In 2012, human genome sequencing accounted for an estimated 280,000 jobs, $19 billion in personal income, $3.9 billion in federal taxes, and $2.1 billion in state and local taxes. And all for a price of $2 per year per U.S. resident.</p> <p>“It’s an incredible rate of return,” says MIT economist Simon Johnson.</p> <p>It’s not just genomics that pays off. Every additional $10 million in public funding granted to the National Institutes of Health, according to one MIT study, on average produces 2.7 patents and an additional $30 million in value for the private-sector firms that own those patents. When it comes to military technology, each dollar in publicly funded R&amp;D leads to another $2.50-$5.90 in private-sector investment.</p> <p>In general, “Public investment in science has very big economic returns,” says Johnson, who is the Ronald A. Kurtz Professor of Entrepreneurship at the MIT Sloan School of Management.</p> <p>Yet after a surge in science funding spurred by World War II, the U.S. has lowered its relative level of public investment in research and development — from about 2 percent of GDP in 1964 to under half of that today.</p> <p>Reviving U.S. support of science and technology is one of the best ways we can generate economic growth, according to Johnson and his MIT economist colleague Jonathan Gruber, who is the Ford Professor of Economics in MIT’s Department of Economics. 
And now Johnson and Gruber make that case in a new book, “Jump-Starting America: How Breakthrough Science Can Revive Economic Growth and the American Dream,” published this month by PublicAffairs press.</p> <p>In it, the two scholars contend that pumping up public investment in science would create not only overall growth but also better jobs throughout the economy, in an era when stagnating incomes have caused strain for a large swath of Americans.</p> <p>“Good jobs are for MIT graduates, but they’re also for people who don’t finish college. They’re for people who drop out of high school,” says Johnson. “There’s a tremendous amount of anxiety across the country.”</p> <p><strong>Hello, Columbus</strong></p> <p>Indeed, spurring growth across the country is a key theme of “Jump-Starting America.” Technology-based growth in the U.S. has been focused in a few “superstar” cities, where high-end tech jobs have been accompanied by increased congestion and sky-high housing prices, forcing out the less well-off.</p> <p>“The prosperity has been concentrated in some places where it’s become incredibly expensive to live and work,” Johnson says. That includes Silicon Valley, San Francisco, New York, Los Angeles, Seattle, the Washington area, and the Boston metro area.</p> <p>And yet, Johnson and Gruber believe, the U.S. has scores of cities where the presence of universities combined with industrial know-how could produce more technology-based growth. 
Some already have: As the authors discuss in the book, Orlando, Florida, is a center for high-tech computer modeling and simulation, thanks to the convergence of federal investment, the growth of the University of Central Florida, and local backing of an adjacent research park that supports dozens of thriving enterprises.</p> <p>The Orlando case is “a modern version of what once made America the most prosperous nation on Earth,” the authors write, and they believe it can be replicated widely.</p> <p>“Let’s spread it around the country, to take advantage of where the talent is in the U.S., because there’s a lot of talent away from the coastal cities,” Johnson says.</p> <p>“Jump-Starting America” even contains a list of 102 metropolitan areas the authors think are ripe for investment and growth, thanks to well-educated work forces and affordability, among other factors. At the top of the list are Pittsburgh, Rochester, and three cities in Ohio: Cincinnati, Columbus, and Cleveland.</p> <p>The authors’ list does not include any California cities — where affordability is generally a problem — but they view the ranking as a conversation-starter, not the last word on the subject. The book’s website has an <a href="">interactive feature</a> where readers can tweak the criteria used to rank cities, and see the results.</p> <p>“We’d like people to challenge us and say, maybe we should think of the criteria differently,” Johnson says. “Everyone should be thinking about what have we got in our region, what do we need to get, and what kind of investment would make the difference here.”</p> <p><strong>A dividend on your investment</strong></p> <p>“Jump-Starting America” has received praise from scholars and policy experts. Susan Athey, an economist at Stanford University, calls the book “brilliant” and says it “brings together economic history, urban economics, and the design of incentives to build an ambitious proposal” for growth. 
Jean Tirole, of the Toulouse School of Economics, says the book gives a boost to industrial policy, by showing “how the government can promote innovation while avoiding the classic pitfalls” of such interventions.</p> <p>For their part, Johnson and Gruber readily acknowledge that public investment in R&amp;D is just one component of long-term growth. Continued private-sector investment, they note, is vital as well. Still, the book does devote a chapter to the limits of private investment, including the short-term focus on returns that has led many firms to scale back their own R&amp;D operations.</p> <p>“We’re very pro-private sector,” Johnson says. “I’m a professor of entrepreneurship at Sloan, and I work a lot with entrepreneurs around the world and venture capitalists. They will tell you, quite frankly … their incentives are to make money relatively quickly, given their time horizons and what their investors want. As a result they are drawn to a few sectors, including information technology, and within that more software than hardware these days.”</p> <p>As a sweetener for any program of public science investment, the authors also suggest that people should receive a kind of annual innovation dividend — a return on their tax dollars. In effect, this would be a scaled-up version of the dividend that, for instance, Alaskans receive on that state’s energy revenues.</p> <p>That would be a departure from current U.S. policy, but ultimately, Johnson and Gruber say, a departure is what we need.</p> <p>“We don’t find the existing policies from anyone compelling,” Johnson says. “So we wanted to put some ideas out there and really start a debate about those alternatives, including a return to a bigger investment in science and technology.”</p> Jon Gruber and Simon JohnsonImage: M. 
Scott Brauer and MIT SloanEconomics, Business and management, innovation, Invention, Books and authors, Policy, Innovation and Entrepreneurship (I&E), Technology and society, History of science, Sloan School of Management, School of Humanities Arts and Social Sciences The evolving definition of a gene Professor Gerald Fink, a pioneer in the field of genetics, delivers the annual Killian Lecture. Fri, 05 Apr 2019 15:28:04 -0400 MIT News Office <p>More than 50 years ago, scientists came up with a definition for the gene: a sequence of DNA that is copied into RNA, which is used as a blueprint for assembling a protein.</p> <p>In recent years, however, with the discovery of ever more DNA sequences that play key roles in gene expression without being translated into proteins, this simple definition needed revision, according to Gerald Fink, the Margaret and Herman Sokol Professor in Biomedical Research and American Cancer Society Professor of Genetics in MIT’s Department of Biology.</p> <p>Fink, a pioneer in the field of genetics, discussed the evolution of this definition during yesterday’s James R. Killian Jr. Faculty Achievement Award Lecture, titled, “What is a Gene?”</p> <p>“In genetics, we’ve lost a simple definition of the gene — a definition that lasted over 50 years,” he said. “But loss of the definition has spawned whole new fields trying to understand the unknown information in non-protein-coding DNA.”</p> <p>Established in 1971 to honor MIT’s 10th president, James Killian, the Killian Award recognizes extraordinary professional achievements by an MIT faculty member. 
Fink, who is also a member and former director of the Whitehead Institute, was honored for his achievements in developing brewer’s yeast as “the premier model for understanding the biology of eukaryotes” — organisms whose cells have nuclei.</p> <p>“He is among the very few scientists who can be singularly credited with fundamentally changing the way we approach biological problems,” says the award citation, read by Susan Silbey, chair of the MIT faculty, who presented Fink with the award.</p> <div class="cms-placeholder-content-video"></div> <p><strong>Genetic revolution</strong></p> <p>Growing up in a “sleepy” town on Long Island, Fink had a keen interest in science, which spiked after the Soviets launched the first satellite to orbit the Earth.</p> <p>“In 1957, when I went out in our backyard, I was hypnotized by the new star in the sky, as Sputnik slowly raced toward the horizon,” he said. “Overnight, science became a national priority, energized by the dread of Soviet technology and technological superiority.”</p> <p>After earning his bachelor’s degree at Amherst College, Fink began studying yeast as a graduate student at Yale University, and in 1976, he developed a way to insert any DNA sequence into yeast cells.</p> <p>This discovery transformed biomedical research by allowing scientists to program yeast to produce any protein they wanted, as long as they knew the DNA sequence of the gene that encoded it. It also proved industrially useful: More than half of all therapeutic insulin is now produced by yeast, along with many other drugs and vaccines, as well as biofuels such as ethanol.</p> <p>At that time, scientists were operating with a straightforward definition of the gene, based on the “central dogma” of biology: DNA makes RNA, and RNA makes proteins. Therefore, a gene was defined as a sequence of DNA that could code for a protein.
This was convenient because it allowed computers to be programmed to search the genome for genes by looking for specific DNA sequences bracketed by codons that indicate the starting and stopping points of a gene.</p> <p>In recent decades, scientists have done just that, identifying about 20,000 protein-coding genes in the human genome. They have also discovered genetic mechanisms involved in thousands of human diseases. Using new tools such as CRISPR, which enables genome editing, cures for such diseases may soon be available, Fink believes.</p> <p>“The definition of a gene as a DNA sequence that codes for a protein, coupled with the sequencing of the human genome, has revolutionized molecular medicine,” he said. “Genome sequencing, along with computational power to compare and analyze genomes, has led to important insights into basic science and disease.”</p> <p>However, he pointed out, protein-coding genes account for just 2 percent of the entire human genome. What about the rest of it? Scientists have traditionally referred to the remaining 98 percent as “junk DNA” that has no useful function.</p> <p>In the 1980s, Fink began to suspect that this junk DNA was not as useless as had been believed. He and others discovered that in yeast, certain segments of DNA could “jump” from one location to another, and that these segments appeared to regulate the expression of whatever genes were nearby. This phenomenon was later observed in human cells as well.</p> <p>“That alerted me and others to the fact that ‘junk DNA’ might be making RNA but not proteins,” Fink said.</p> <p>Since then, scientists have discovered many types of non-protein-coding RNA molecules, including microRNAs, which can block the production of proteins, and long non-coding RNAs (lncRNAs), which have many roles in gene regulation.</p> <p>“In the last 15 years, it has been found that these are critical for controlling the gene expression of protein-coding genes,” Fink said. 
“We’re only now beginning to visualize the importance of this formerly invisible part of the genome.”</p> <p>Such discoveries demonstrate that the traditional definition of a gene is inadequate to encompass all of the information stored in the genome, he said.</p> <p>“The existence of these diverse classes of RNA is evidence that there is no single physical and functional unit of heredity that we can call the gene,” he said. “Rather, the genome contains many different categories of informational units, each of which may be considered a gene.”</p> <p><strong>“A community of scholars”</strong></p> <p>In selecting Fink for the Killian Award, the award committee also cited his contributions to the founding of the Whitehead Institute, which opened in 1982. At the time, forming a research institute that was part of MIT yet also its own entity was considered a “radical experiment,” Fink recalled.</p> <p>Though controversial, prompting heated debate among the faculty, establishing the Whitehead Institute laid the groundwork for many other research institutes that have been established at MIT, and also helped to attract biotechnology companies to the Kendall Square area, Fink said.</p> <p>“As we now know, MIT made the right decision. The Whitehead turned out to be a successful pioneer experiment that in my opinion led to the blossoming of the Kendall Square area,” he said.</p> <p>Fink was hired as one of the first faculty members of the Whitehead Institute, and served as its director from 1990 to 2001, when he oversaw the Whitehead’s contributions to the Human Genome Project. He recalled that throughout his career, he has collaborated extensively not only with other biologists, but with MIT colleagues in fields such as physics, chemical engineering, and electrical engineering and computer science.</p> <p>“MIT is a community of scholars, and I was welcomed into the community,” he said.</p> MIT Professor Gerald Fink delivers the 2018-2019 James R. Killian Jr.
Faculty Achievement Award Lecture, titled, “What is a Gene?”Photo: Jake BelcherAwards, honors and fellowships, Faculty, Biology, DNA, RNA, Genetics, History of science, School of Science, Whitehead Institute Tim Berners-Lee named FT “Boldness in Business” Person of the Year CSAIL researcher and web inventor recognized for his startup inrupt, which aims to give users more control over their data. Mon, 18 Mar 2019 10:25:01 -0400 Adam Conner-Simons | CSAIL <p>The week that his invention of the World Wide Web <a href="" target="_blank">turned 30</a>, MIT professor Sir Tim Berners-Lee has been named the Financial Times’ <a href="" target="_blank">Person of the Year</a> in their special “Boldness in Business” issue.</p> <p>Berners-Lee was honored for his new startup inrupt, which emerged out of work at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) developing the open-data platform Solid.</p> <p>Solid aims to give users ownership over their data by building decentralized social applications.</p> <p>“Right now we really have the worst of all worlds, in which people not only cannot control their data, but also can’t really use it, because it’s spread across a number of different silo-ed websites,” says Berners-Lee. “Our goal is to ‘re-decentralize the web’ and develop a web architecture that gives users more control over the information they provide to applications.”</p> <p>Solid has produced some 50,000 so-called personal online data stores (PODs) that are being experimented on by thousands of developers across more than 25 countries. His company is also collaborating with partners like the UK’s National Health Service to explore growing the scale of Solid, and intends to launch a user product by the end of the year.</p> <p>In the FT article, Berners-Lee acknowledges the challenges of breaking through with a new paradigm in a climate where companies have vested interests in maintaining their data ecosystem.
But he retains a healthy optimism that recent concerns about data privacy have created more momentum for a project like this.</p> <p>“It is rocket science. It is tricky. Things can blow up on you,” Berners-Lee told FT. “But we know how to fire rockets into the sky. We should be able to build constructive social networks.”</p> <p>Besides his responsibilities at CSAIL, Berners-Lee is director of the <a href="" target="_blank">World Wide Web Consortium</a>, which develops web standards, specifications, and tools, as well as director of the <a href="" target="_blank">World Wide Web Foundation</a>, which does advocacy related to “a free and open web for everyone.”</p> <p>He is the 3Com Founders Professor of Engineering in the Department of Electrical Engineering and Computer Science at MIT as well as a recipient of the ACM Turing Award, often described as “the Nobel Prize of computing,” for inventing the web and developing the protocols that spurred its global use.</p> <p>“Tim’s contributions to computer science have fundamentally transformed the world, and his more recent work with inrupt is poised to do the same,” says CSAIL Director Daniela Rus. “All of us at the lab — and MIT more broadly — are so very proud of him and excited to see how his efforts will continue to impact the way that people use and share data.”</p> MIT Professor Tim Berners-Lee is being honored for work on a new startup, inrupt, aimed at "re-decentralizing the web."Photo: Henry ThomasAwards, honors and fellowships, Faculty, Internet, Computer Science and Artificial Intelligence Laboratory (CSAIL), History of science, Privacy, Invention, School of Engineering, Electrical Engineering & Computer Science (eecs), Technology and society, Computer science and technology MIT celebrates 50th anniversary of historic moon landing Symposium featuring former astronauts and other Apollo mission luminaries examines the program’s legacy. Fri, 15 Mar 2019 15:28:04 -0400 Jennifer Chu | MIT News Office <p>On Sept.
12, 1962, in a speech given in Houston to pump up support for NASA’s Apollo program, President John F. Kennedy shook a stadium crowd with the now-famous quote: “We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard.”</p> <p>As he delivered these lines, engineers in MIT’s Instrumentation Laboratory were already taking up the president’s challenge. One year earlier, NASA had awarded MIT the first major contract of the Apollo program, charging the Instrumentation Lab with developing the spacecraft’s guidance, navigation, and control systems that would shepherd astronauts Michael Collins, Buzz Aldrin, and Neil Armstrong to the moon and back.</p> <p>On July 20, 1969, the hard work of thousands paid off, as Apollo 11 touched down on the lunar surface, safely delivering Armstrong and Aldrin ScD ’63 as the first people to land on the moon.</p> <p>On Wednesday, MIT’s Department of Aeronautics and Astronautics (AeroAstro) celebrated the 50th anniversary of this historic event with the daylong symposium “Apollo 50+50,” featuring former astronauts, engineers, and NASA administrators who examined the legacy of the Apollo program, and MIT faculty, students, industry leaders, and alumni who envisioned what human space exploration might look like in the next 50 years.</p> <p>In welcoming a large audience to Kresge Auditorium, some of whom sported NASA regalia for the occasion, Daniel Hastings, head of AeroAstro, said of today’s prospects for space exploration: “It’s the most exciting time since Armstrong and Aldrin landed on the moon.”</p> <p>The event kicked off three days of programming for <a href="">MIT Space Week</a>, which also included the Media Lab’s “<a href="">Beyond the Cradle: Envisioning a New Space Age</a>” on March 14, and the student-led “<a href="">New Space Age Conference</a>” on March 15.</p> <p><strong>“We could press on”</strong></p> <p>As a “baby boomer living through Apollo,” retired astronaut
Charles Bolden, NASA’s 12th administrator, said the Apollo program illustrated “how masterful we were at overcoming adversity.” In a keynote address that opened the day’s events, Bolden reminded the audience that, at the time the ambitious program got underway in the 1960s, the country was in the violent thick of the civil rights movement.</p> <p>“We were killing each other in the streets,” Bolden said. “And yet we had an agency like NASA, and a small group of people, who were able to bear through everything and land on the moon. … We could recognize there were greater things we could do as a people, and we could press on.”</p> <p>For MIT’s part, the push began with a telegram on Aug. 9, 1961, to Charles Stark Draper, director of the Instrumentation Laboratory, notifying him that NASA had selected the MIT lab “to develop the guidance navigation system of the Project Apollo spacecraft.” Draper, who was known widely as “Doc,” famously assured NASA of MIT’s work by volunteering himself as a crew member on the mission, writing to the agency that “if I am willing to hang my life on our equipment, the whole project will surely have the strongest possible motivation.”</p> <p>This of course proved unnecessary, and Draper went on to lead the development of the guidance system with “unbounded optimism,” as his former student and colleague Lawrence Young, the MIT Apollo Program Professor, recalled in his remarks.</p> <p>“We owe the lighting of our fuse to Doc Draper,” Young said.</p> <p>At the time that MIT took on the Apollo project, the Instrumentation Laboratory, later renamed Draper Laboratory, took up a significant footprint, with 2,000 people and 15 buildings on campus, dedicated largely to the lunar effort.</p> <p>“The Instrumentation Lab dwarfed the [AeroAstro] department,” said Hastings, joking, “it was more like the department was a small pimple on the Instrumentation Lab.”</p> <p><strong>Apollo remembered</strong></p> <p>In a highlight of the day’s
events, NASA astronauts Walter Cunningham (Apollo 7) and Charles Duke SM ’64 (Apollo 16), and MIT Instrumentation Laboratory engineers Donald Eyles and William Widnall ’59, SM ’62 — all from the Apollo era — took the stage to reminisce about some of the technical challenges and emotional moments that defined the program.</p> <p>One of the recurring themes of their conversation was the observation that things simply got done faster back then. For instance, Duke remarked that it took just 8.5 years from when Kennedy first called for the mission to when Armstrong’s boots hit the lunar surface.</p> <p>“I would argue the proposal for such a mission would take longer [today],” Duke said to an appreciative rumble from the audience.</p> <p>The Apollo Guidance Computer, developed at MIT, weighed 70 pounds, consumed 55 watts of power — half the wattage of a regular lightbulb — and took up less than 1 cubic foot inside the spacecraft. The system was one of the first digital flight computers, and one of the first computers to use integrated circuits.</p> <p>Eyles and Widnall recalled in detail the technical efforts that went into developing the computer’s hardware and software. “If you’re picturing [the computer code] on a monitor, you’d be wrong,” Eyles told the audience. “We were writing the program on IBM punch cards. That clunking mechanical sound of the key-punch machine was the soundtrack to creating the software.”</p> <p>Written out, that code famously amounted to a stack of paper as tall as lead software engineer Margaret Hamilton — who was not able to participate in Wednesday’s panel but attended the symposium dinner that evening.</p> <p>In the end, the Apollo Guidance Computer succeeded in steering 15 space flights, including nine to the moon and six lunar landings.
That’s not to say that the system didn’t experience some drama along the way, and Duke, who was the capsule communicator, or CAPCOM, for Apollo 11, remembers having to radio up to the spacecraft during the now-famous rocky landing.</p> <p>“When I heard the first alarm go off during the braking phase, I thought we were dead in the water,” Duke said of the first in a series of alerts that the Apollo astronauts reported, indicating that the computer was overloaded, during the most computationally taxing phase of the mission. The spacecraft was several miles off course and needed to fly over a “boulder field” and land within 60 seconds or risk running out of fuel.</p> <p>Flight controllers in Houston’s Mission Control Center determined that if nothing else went wrong, the astronauts, despite the alarms, could proceed with landing.</p> <p>“Tension was high,” Duke said of the moment. “You didn’t want to touch down on a boulder and blow a nozzle, and spoil your whole day.”</p> <p>When the crew finally touched down on the Sea of Tranquility, with Armstrong’s cool report that “the Eagle has landed,” Duke, too wound-up to properly verbalize the callback “Tranquility,” recalls: “I was so excited … it came out as ‘Twang,’ or something like that. The tension — it was like popping a balloon.”</p> <p>Since the Apollo era, NASA has launched astronauts on numerous missions, and many of them are MIT graduates. On Wednesday, 13 of those graduates came onstage to be recognized along with the Apollo crew.</p> <p>In introducing them to the audience, Jeffrey Hoffman, a former astronaut and now AeroAstro professor of the practice, noted MIT’s significant representation in the astronaut community.
For instance, in the five missions to repair the Hubble Space Telescope, which comprised 24 spacewalks, 13 of which were performed by MIT graduates.</p> <p>“That’s pretty cool,” Hoffman said.</p> <p><strong>On the horizon</strong></p> <p>The Apollo moon rocks that were brought back to Earth have “evolved our understanding of how the moon formed,” said Maria Zuber, MIT’s vice president for research and the E.A. Griswold Professor of Geophysics in the Department of Earth, Atmospheric and Planetary Sciences. These rocks “vanquished” the idea that the moon originally formed as a cold assemblage of rocks and “foo foo dust,” she said.</p> <p>Instead, after carefully analyzing samples from Apollo 11 and other missions, scientists at MIT and elsewhere have found that the moon was a dynamic body, with a surface that at one time was entirely molten, and a metallic core, or “dynamo,” powering an early lunar magnetic field. Even more provocative was the finding that the moon was not in fact “bone-dry,” but actually harbored water — an idea that Zuber said was virtually unpublishable until an MIT graduate reported evidence of water in Apollo samples, after which the floodgates opened in support of the idea.</p> <p>To consider the next 50 years of space exploration, the MIT symposium featured a panel of faculty members — Paulo Lozano, Danielle Wood, Richard Binzel, and Sara Seager — who highlighted, respectively, the development of tiny thrusters to power miniature spacecraft; an effort to enable wider access to microgravity missions; an MIT student-designed mission (REXIS) that is currently analyzing the near-Earth asteroid Bennu; and TESS and ASTERIA, satellite missions that are currently in orbit, looking for planets and possibly life, outside our solar system.</p> <p>Industry leaders also weighed in on the growing commercialization of space exploration, in a panel moderated by Olivier de Weck, Professor of Aeronautics and Astronautics and Engineering Systems,
featuring MIT alums who currently head major aerospace companies.</p> <p>Keoki Jackson, chief technology officer of Lockheed Martin, noted the pervasiveness of space-based technologies, such as GPS-dependent apps for everything from weather and news, to Uber.</p> <p>“[Commercial enterprises] have made space a taken-for-granted part of life,” said Jackson, noting later in the panel that in 2015, 1 billion GPS devices had been sold around the world. “This shows you what can happen exponentially when you come up with something truly enabling.”</p> <p>“The challenge we face is talent, and in particular, diversity,” said John Langford, CEO and founder of Aurora Flight Sciences, who noted the panel’s all-male participants as an example. “It’s an industry-wide challenge. We’re working to reform ourselves, as we move from the brigade-type technologies that we grew up with, to incorporating technologies such as computer technology and artificial intelligence.”</p> <p><strong>Future missions</strong></p> <p>In a glimpse of what the future of space exploration might hold, MIT students presented lightning talks on a range of projects, including a custom-designed drill to excavate ice on Mars, a system that makes oxygen on Mars to fuel return missions to Earth, and a plan to send CubeSats around the world to monitor water vapor as a measure of climate change.</p> <p>Audience members voted online for the best pitch, which ultimately went to <a href="">Raichelle Aniceto</a> and her presentation of a CubeSat-enabled laser communications system designed to transmit large amounts of data from the moon to Earth in just five minutes.</p> <p>In the last keynote address of the symposium, Thomas Zurbuchen, associate administrator for NASA’s Science Mission Directorate, told the audience that there is still a lot of research to be done on the moon, which he said is changing, as evidenced by new craters that have formed in the last 50 years.</p> <p>“The moon of the Apollo era is not the 
same moon of today,” said Zurbuchen, who noted that just this week, NASA announced it will open previously unopened samples of soil collected by the Apollo missions.</p> <p>In closing the symposium, Dava Newman, the Apollo Program Professor of Astronautics and former NASA deputy administrator, envisioned a future dedicated to sending humans back to the moon, and ultimately to Mars.</p> <p>“I’m a rocket scientist. I got here because of Apollo, and Eleanor Roosevelt said it best: Believe in the beauty of your dreams,” Newman said. “The challenge is, within 50 years, to be boots on Mars. I think we have the brains and the doers and inspiration to really make that happen.”</p> The Apollo astronauts and engineers were joined onstage by some of MIT’s astronaut alumni, including two active astronauts, Mike Fincke and Warren Hoburg (in blue jackets).Photo: Jake BelcherAeronautical and astronautical engineering, Astronomy, Astrophysics, EAPS, Mars, NASA, Planetary science, Research, Satellites, School of Engineering, space, astronomy and planetary science, Special events and guest speakers, MIT History, History of science Chernobyl: How bad was it? A scholar’s book uncovers new material about the effects of the infamous nuclear meltdown. Tue, 05 Mar 2019 23:59:59 -0500 Peter Dizikes | MIT News Office <p>Not long after midnight on April 26, 1986, the world’s worst nuclear power accident began. Workers were conducting a test at the Chernobyl Nuclear Power Plant in the Ukraine when their operations spun out of control. Unthinkably, the core of the plant’s reactor No. 4 exploded, first blowing off its giant concrete lid, then letting a massive stream of radiation into the air.</p> <p>Notoriously, the Soviet Union kept news of the disaster quiet for a couple of days. By the time the outside world knew about it, 148 men who had been on the Chernobyl site — firefighters and other workers — were already being treated in the special radiation unit of a Moscow hospital.
And that was just one sliver of the population that wound up seeking medical care after Chernobyl.&nbsp;</p> <p>By the end of the summer of 1986, Moscow hospitals alone had treated about 15,000 people exposed to Chernobyl radiation. The Soviet republics of Ukraine and Belarus combined to treat about 40,000 patients in hospitals due to radiation exposure in the same period of time; in Belarus, about half were children.</p> <p>And while 120,000 residents were hastily evacuated from the “Zone of Alienation” around Chernobyl, about 600,000 emergency workers eventually went into the area, trying to seal the reactor and make the area safe again. About 31,000 soldiers camped out near the reactor, where radioactivity reached about 1,000 times the normal levels within a week, and contaminated the drinking water.</p> <p>Which leads to the question: How bad was Chernobyl? A 2006 United Nations report contends Chernobyl caused 54 deaths. But MIT Professor Kate Brown, for one, is skeptical about that figure. As a historian of science who has written extensively about both the Soviet Union and nuclear technology, she decided to explore the issue at length.</p> <p>The result is her new book, “Manual for Survival: A Chernobyl Guide to the Future,” published this month by W.W. Norton and Co. In it, Brown brings new research to bear on the issue: She is the first historian to examine certain regional archives where the medical response to Chernobyl was most extensively chronicled, and has found reports and documents casting new light on the story.</p> <p>Brown does not pinpoint a death-toll number herself. 
Instead, through her archival research and on-the-ground reporting, she examines the full range of ways radiation has affected residents throughout the region, while explaining how Soviet politics helped limit our knowledge of the incident.</p> <p>“I wrote this book so it’s something we take a look at more seriously,” says Brown, a professor in MIT’s Program in Science, Technology, and Society.</p> <p><strong>Lying to themselves</strong></p> <p>To see how the effects of Chernobyl could be much more widespread than previously acknowledged, consider a pattern Brown observed from her archival work: Scientists and officials at the local and regional levels examined the effects of Chernobyl on people quite extensively, even performing controlled studies and other robust techniques, but other Soviet officials minimized the evidence of major health consequences.</p> <p>“Part of the problem is the Soviets lied to themselves,” says Brown. “On the ground it [the impact] was very clear, but at higher levels, there were ministers whose job was to report good health.” Soviet officials, Brown adds, would “massage the numbers” as the data ascended in the state bureaucracy.</p> <p>“Everybody was making the record look better by the time it got to Moscow,” Brown says. “And I can show that.”</p> <p>Then too, the effects of Chernobyl’s radiation have been diffuse. As Brown discovered, 298 workers at a wool factory in the city of Chernihiv, about 50 miles from Chernobyl, were given “liquidator status” due to their health problems. This is the same designation applied to emergency personnel working at the Chernobyl site itself.</p> <p>Why were the wool workers so exposed to radiation? As Brown found after investigating the Chernihiv wool factory itself, Soviet authorities had workers kill livestock from the Zone of Alienation — and then send their usable parts for processing. The wool factory workers had become sick because they were dealing with wool from highly contaminated sheep.
Such scenarios may have been significantly overlooked in some Chernobyl assessments.</p> <p>A significant section of “Manual for Survival” — the title comes from some safety instructions written for local residents — also explores the accident’s effects on the region’s agricultural economy. In Belarus, one-third of milk and one-fifth of meat was too contaminated to use in 1987, according to the official in charge of food production in the state, and levels became worse the following year. At the same time, in the Ukraine, between 30 and 90 percent of milk in “clean” areas was judged too contaminated to drink.</p> <p>As part of her efforts to study Chernobyl’s effects in person, Brown also ventured into the forests and marshes near Chernobyl, accompanying American and Finnish scientists — who are among the few to have extensively studied the area’s wildlife in the field. They have found, among other things, the decimation of parts of the ecosystem, including dramatically fewer pollinators (such as bees) in higher-radiation places, and thus radically reduced numbers of fruit trees and shrubs. Brown also directly addresses scientific disagreements over such findings, while noting that some of the most negative conclusions about the regional ecosystems have stemmed from extensive on-the-ground investigations of them.</p> <p>Additionally, disputes over the effects of Chernobyl rumble on because, as Brown acknowledges, it is “easy to deny” that any one occurrence of cancer is due to radiation exposure. As Brown notes in the book, “a correlation does not prove a connection,” despite increased rates of cancer and other illnesses in the region.</p> <p>Still, in “Manual for Survival,” Brown does suggest that the higher end of existing death estimates seems plausible. The Ukrainian state pays benefits to about 35,000 people whose spouses apparently died from Chernobyl-caused illnesses.
Some scientists have told her they think 150,000 deaths is a more likely baseline for the Ukraine alone. (There are no official or unofficial counts for Belarus and western Russia.)</p> <p><strong>Chernobyl: This past isn’t even past</strong></p> <p>Due to the long-term nature of some forms of radiation, Chernobyl’s effects continue today — to an extent that is also under-studied. In the book’s epilogue, Brown visits a forest in the Ukraine where people pick blueberries for export, with each batch being tested for radiation. However, Brown observed, bundles of blueberries over the accepted radiation limit are not necessarily discarded. Instead, berries from those lots are mixed in with cleaner blueberries, so each remixed batch as a whole falls under the regulatory limit. People outside the Ukraine, she writes, “may wake to a breakfast of Chernobyl blueberries” without knowing it. &nbsp;</p> <p>Brown emphasizes that her goal is not primarily to alarm readers, but to push research forward. She says she would like her audience — general readers, undergraduates, scientists — to think deeply about how apparently settled science may sometimes rely on contingent conclusions developed in particular political circumstances.</p> <p>“I would like scientists to know a bit more about the history behind the science,” Brown says.</p> <p>Other scholars say “Manual for Survival” is an important contribution to our understanding of Chernobyl. J.R. 
McNeill, a historian at Georgetown University, says Brown has shed new light on Chernobyl by illuminating “decades of official efforts to suppress its grim truths.” Allison Macfarlane, director of the Institute for International Science and Technology Policy at George Washington University, and former chairman of the Nuclear Regulatory Commission, says the book effectively “uncovers the devastating effects” of Chernobyl.</p> <p>For her part, Brown says one additional aim in writing the book was to help us remind ourselves that our inventions and devices are fallible. We need to be vigilant to avoid future disasters along the lines of Chernobyl.</p> <p>“I think it could be a guide to the future if we’re not a little bit more thoughtful, and a little more transparent” than the Soviet officials were, Brown says.</p> Kate Brown, author of “Manual for Survival: A Chernobyl Guide to the Future.”Courtesy of Kate BrownHistory, Faculty, Research, Books and authors, Energy, Technology and society, Program in STS, Nuclear power and reactors, History of science, School of Humanities Arts and Social Sciences Computing the future Fireside chat brings together six Turing Award winners to reflect on their field and the MIT Stephen A. Schwarzman College of Computing. Tue, 05 Mar 2019 11:50:02 -0500 Adam Conner-Simons | CSAIL <p>As part of the public launch of the Stephen A. Schwarzman College of Computing, MIT hosted a special fireside chat Wednesday, Feb. 27, at Kresge Auditorium that brought together six MIT professors who have received the Association for Computing Machinery’s esteemed A.M.
Turing Award, often described as “the Nobel Prize for computing.”</p> <p>Moderated by Professor Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), the conversation included Tim Berners-Lee, the 3Com Founders Professor of Engineering; Shafi Goldwasser, the RSA Professor of Electrical Engineering and Computer Science; Butler Lampson, technical fellow at the Microsoft Corporation and adjunct professor of computer science at MIT; Barbara Liskov, Institute Professor; Ronald Rivest, Institute Professor; and Michael Stonebraker, CTO of Paradigm4 and of Tamr Inc. and adjunct professor of computer science at MIT. (Other MIT Turing Award winners include Professor Silvio Micali, Professor Emeritus Fernando “Corby” Corbato, and the late Professor Marvin Minsky.)</p> <p>Rus first briefly highlighted the accomplishments of the Turing winners, from Lampson’s contributions to the growth of personal computers to how Berners-Lee and Rivest’s work has fundamentally transformed global commerce.</p> <p>“Imagine what the world would be like without these achievements in AI, databases, cryptography, and more,” said Rus. “Just try to imagine a day without the World Wide Web and all that it enables — no online news, no electronic transactions, no social media.”</p> <p>Coming off less as a panel than a casual conversation among friends, the wide-ranging dialogue reflected the CSAIL colleagues’ infectious enthusiasm for each other’s work. One theme was the serendipity of computer science and how often the panelists’ breakthroughs in one area of research ended up having major impacts in other, completely unexpected domains. For example, Goldwasser discussed her work on zero-knowledge proofs and their use in fields such as cloud computing and machine learning that didn’t even exist when she and Micali first dreamed them up. 
Rivest later joked that the thriving study of quantum computing has been largely driven by the desire to “break” his RSA (named for Rivest-Shamir-Adleman) encryption algorithm.</p> <p>With a broad lens looking toward the future, panelists also discussed how to create more connections between their work and topics such as climate change and brain research. Liskov cited medical technology, and how more effective data collection could allow doctors to spend less time on their computers and more time with patients. Lampson spoke of the importance of developing more specialized hardware, like Google has with its tensor processing unit.</p> <p>Another recurring theme during the panel was a hope that the new college can also keep MIT at the center of the conversation about the potential adverse effects of computing technologies.</p> <p>“The future of the field isn’t just building new functionality for the good, but thinking about how it can be abused,” Rivest said. “It will be crucially important to teach our students how to think more like adversaries.”</p> <p>The group also reminisced about the <a href="" target="_blank">letter they penned</a> in The Tech student newspaper in 2017 calling for the creation of a computing school.</p> <p>“Since we wrote that letter, the MIT administration has created a college and raised $1 billion for a new building and 50 professors,” said Stonebraker.
“The fact that they’ve done this all from a standing start in 16 months is truly remarkable.”</p> <p>The laureates agreed that one of MIT’s core goals should be to teach computational skills in a bidirectional way: that is, for MIT’s existing schools to inform the college’s direction, and for the college to also teach concepts of “computational thinking” that are more generalizable than any one programming language or algorithmic framework.</p> <p>“I think we do a reasonable job of training computer scientists, but one mission of the college will be to teach the right kinds of computing skills to the rest of campus,” said Stonebraker. “One of the big challenges the new dean is going to face is how to organize all that.”</p> <p>The panelists also reflected on MIT’s unique positioning to be able to continue to study tough “moonshot” problems in computing that require more than just incremental progress.</p> <p>“As the world’s leading technological university, MIT has an obligation to lead the forefront of research rather than follow industry,” Goldwasser said. “What separates us from industrial product — and even from other research labs — is our ability to pursue basic research as a pure metric rather than for dollar signs.”</p> MIT A.M. Turing Award winners discuss MIT's new College of Computing: (l-r) Sir Tim Berners-Lee, Shafi Goldwasser, Butler Lampson, Barbara Liskov, Ronald Rivest, Michael Stonebraker, and Daniela Rus. 
Photo: Rose LincolnMIT Schwarzman College of Computing, MIT History, Computer science and technology, Special events and guest speakers, Faculty, History of science, Awards, honors and fellowships, Computer Science and Artificial Intelligence Laboratory (CSAIL), School of Engineering, Artificial intelligence, Machine learning, Electrical engineering and computer science (EECS) Twenty-five ways in which MIT has transformed computing From digital circuits to ingestible robots, the Institute has helped spearhead key innovations in the technology revolution. Mon, 25 Feb 2019 14:40:01 -0500 Adam Conner-Simons | Rachel Gordon | MIT CSAIL <p>This month MIT is celebrating the launch of the new $1 billion MIT Stephen A. Schwarzman College of Computing. To help commemorate the event, here’s a list of 25 ways in which MIT has already transformed the world of computing technology.</p> <p><strong>1937: Digital circuits</strong></p> <p>Master’s student Claude Shannon showed that the principles of true/false logic could be used to represent the on-off states of electric switches — a concept that served as the foundation of the field of digital circuits, and, therefore, the entire industry of digital computing itself.</p> <p><strong>1944: The digital computer</strong></p> <p>The first digital computer that could operate in real time came out of Project Whirlwind, an initiative during World War II in which <a href="" target="_blank">MIT worked with the U.S. Navy</a> to develop a universal flight simulator.
The device’s success led to the creation of MIT Lincoln Laboratory in 1951.</p> <p><strong>1945: Memex</strong></p> <p>Professor Vannevar Bush proposed a data system called a “Memex” that would allow a user to “<a href="" target="_blank">store all his books, records, and communications</a>” and retrieve them at will — a concept that inspired the early hypertext systems that led, decades later, to the World Wide Web.</p> <p><strong>1958: Functional programming</strong></p> <p>LISP, the first functional programming language, was invented at MIT by Professor John McCarthy. Before LISP, programming had difficulty juggling multiple processes at once because it was “procedural” (like cooking a recipe). Functional languages let you describe required behaviors more simply, allowing work on much bigger problems than ever before.</p> <p><strong>1959: The fax</strong></p> <p>In trying to understand the words of a strongly accented colleague over the phone, MIT student Sam Asano was frustrated that they couldn’t just draw pictures and instantly send them to each other — so he created <a href="">a technology to transmit scanned material</a> through phone lines. His fax machine was licensed to a Japanese telecom company before becoming a worldwide phenomenon.</p> <p><strong>1962: The multiplayer video game</strong></p> <p>When a PDP-1 computer arrived at MIT’s Electrical Engineering Department, a group of crafty students — including Steven “Slug” Russell from Marvin Minsky’s artificial intelligence group — went to work creating “<a href="">SpaceWar!</a>,” a space-combat video game that became very popular among early programmers and is considered the world’s first multiplayer game. (Play it <a href="">here</a>.)</p> <p><strong>1963: The password</strong></p> <p>The average person has <a href="" target="_blank">13 passwords</a> — and for that you can thank MIT’s Compatible Time-Sharing System, which by most accounts established the first computer password.
“We were setting up multiple terminals which were to be used by multiple persons but with each person having his own private set of files,” Professor Corby Corbato <a href="">told WIRED</a>. “Putting a password on for each individual user as a lock seemed like a very straightforward solution.”</p> <p><strong>1963: Graphical user interfaces</strong></p> <p>Nearly 50 years before the iPad, an MIT PhD student had already come up with the idea of <a href="">directly interfacing with a computer screen</a>. The “Sketchpad” developed by&nbsp;Ivan Sutherland PhD ’63 allowed users to draw geometric shapes with a touch-pen, pioneering the practice of “computer-assisted drafting” — which has proven vital for architects, planners, and <a href="">even toddlers</a>.</p> <p><strong>1964: Multics</strong></p> <p>MIT spearheaded the time-sharing system that inspired UNIX and <a href="">laid the groundwork</a> for many aspects of modern computing, from hierarchical file systems to buffer-overflow security. Multics furthered the idea of the computer as a “utility” to be used at any time, like water or electricity.</p> <p><strong>1969: Moon code</strong></p> <p>Margaret Hamilton led the MIT team that coded <a href="">the Apollo 11 navigation system</a>, which landed astronauts Neil Armstrong and Buzz Aldrin&nbsp;ScD ’63 on the moon. The robust software overrode a command to switch the flight computer’s priority system to a radar system, and no software bugs were found on any crewed Apollo missions.</p> <p><strong>1971: Email</strong></p> <p>The first email to ever travel across a computer network was sent to <a href="">two computers that were right next to each other</a> — and it came from MIT alumnus Ray Tomlinson '65 when he was working at spinoff BBN Technologies. 
(He’s the one you can credit, or blame, for <a href="" target="_blank">the @ symbol</a>.)</p> <p><strong>1973: The PC</strong></p> <p><a href="">MIT Professor Butler Lampson</a> was a founding member of Xerox’s Palo Alto Research Center (PARC), where his work earned him the title of “father of the modern PC.” The Xerox Alto platform was used to create the first graphical user interface (GUI), the first bitmapped display, and the first “What-You-See-Is-What-You-Get” (WYSIWYG) editor.</p> <p><strong>1977: Data encryption</strong></p> <p>E-commerce was first made possible by the MIT team behind the RSA algorithm, a method of data encryption based on the concept of <a href="">how difficult it is to factor the product of two huge prime numbers</a>. Who knew that math would be why you can get your last-minute holiday shopping done?</p> <p><strong>1979: The spreadsheet</strong></p> <p>In 1979, Bob Frankston '70 and Dan Bricklin '73 worked late into the night on an MIT mainframe to create VisiCalc, <a href="">the first electronic spreadsheet</a>, which sold more than 100,000 copies in its first year. Three years later, Microsoft got into the game with “Multiplan,” a program that later became Excel.</p> <p><strong>1980: Ethernet</strong></p> <p>Before there was Wi-Fi, there was Ethernet — the networking technology that lets you get online with a simple cable plug-in. Co-invented by <a href="">MIT alumnus Bob Metcalfe</a> '69, who was <a href="">part of MIT’s Project MAC team</a> and later went on to found 3Com, Ethernet <a href="">helped make the Internet</a> the fast, convenient platform that it is today.</p> <p><strong>1980: The optical mouse</strong></p> <p>Undergrad Steve Kirsch '80 was the first to patent <a href="">an optical computer mouse</a> — he had wanted to make a “pointing device” with a minimum of precision moving parts — and went on to found Mouse Systems Corp.
(He also <a href="">patented the method of tracking online ad impressions</a> through click-counting.)</p> <p><strong>1983: The growth of free software</strong></p> <p>Early AI Lab programmer Richard Stallman was a major pioneer in hacker culture and the free-software movement through his <a href="">GNU Project</a>, which set out to develop a free alternative to the Unix OS, and laid the groundwork for Linux and other important computing innovations.</p> <p><strong>1985: Spanning tree algorithm</strong></p> <p>Radia Perlman '73, SM '76, PhD '88 hates when people call her “the mother of the Internet,” but her work developing the Spanning Tree Protocol <a href="">was vital</a> for being able to route data across global computer networks. (She also developed TORTIS, a version of the children’s programming language LOGO designed for very young learners.)</p> <p><strong>1994: The World Wide Web Consortium (W3C)</strong></p> <p>After inventing the web, Tim Berners-Lee joined MIT and launched a consortium devoted to setting global standards for building websites, browsers, and devices. Among other things, W3C standards ensure that sites are accessible, secure, and easily “crawled” for SEO.</p> <p><strong>1999: The birth of blockchain</strong></p> <p>MIT Institute Professor Barbara Liskov’s <a href="">paper on Practical Byzantine Fault Tolerance</a> helped kickstart the field of blockchain, the distributed-ledger technology that underpins cryptocurrencies. Her team’s protocol could handle high-transaction throughputs and used concepts that are vital for many of today’s blockchain platforms.</p> <p><strong>2002: Roomba</strong></p> <p>While we don’t yet have robots running errands for us, we do have robo-vacuums — and for that, we can thank MIT spinoff iRobot.
The company has sold more than 20 million of its Roombas and spawned an entire industry of automated cleaning products.</p> <p><strong>2007: The mobile personal assistant</strong></p> <p>Before Siri and Alexa, there was MIT Professor Boris Katz’s <a href="">StartMobile</a>, an app that allowed users to schedule appointments, get information, and do other tasks using natural language.</p> <p><strong>2012: EdX</strong></p> <p>Led by former CSAIL director Anant Agarwal, MIT’s open-source, nonprofit online learning platform with Harvard University offers free courses that have drawn <a href="">more than 18 million learners around the globe</a>.</p> <p><strong>2013: Boston Dynamics</strong></p> <p>Professor Marc Raibert’s spinoff Boston Dynamics <a href="">builds bots like “Big Dog” and “Spot Mini” </a>that can climb, run, jump and even do back-flips. Their humanoid robot Atlas was used in the DARPA Robotics Challenge aimed at developing robots for disaster relief sites.</p> <p><strong>2016: Robots you can swallow</strong></p> <p>CSAIL Director Daniela Rus’ <a href="">ingestible origami robot</a> can unfold itself from a swallowed capsule.
Using an external magnetic field, it could one day crawl across your stomach wall to remove swallowed batteries or patch wounds.</p> Whirlwind I, the first digital computer capable of real-time computationPhoto: Daderot/Wikimedia CommonsHistory, History of science, Computer science and technology, Computer Science and Artificial Intelligence Laboratory (CSAIL), Robotics, Data, Algorithms, Invention, Research, History of MIT, Alumni/ae, Faculty, Electrical Engineering & Computer Science (eecs), Aeronautical and astronautical engineering, Internet, Mathematics, Cryptography, online learning, EdX, Startups, Innovation and Entrepreneurship (I&E), School of Engineering, Sloan School of Management, MIT Schwarzman College of Computing Anna Frebel is searching the stars for clues to the universe’s origins MIT astronomer and writer investigates ancient starlight and shares her excitement about the cosmos. Tue, 01 Jan 2019 23:59:59 -0500 Jennifer Chu | MIT News Office <p>In August 2002, Anna Frebel pressed pause on her undergraduate physics studies in Germany and spent her entire life savings on a plane ticket to take her halfway around the world, to a mountaintop observatory just outside Canberra, Australia.</p> <p>She spent the next five months volunteering at the Australian National University Research School of Astronomy and Astrophysics, where astronomers had regular access to a set of world-class telescopes set atop Mount Stromlo.</p> <p>On Jan. 18, 2003, brushfires that had been burning for weeks in the surrounding forest suddenly advanced toward Canberra, whipped up by a dry, scorching wind.</p> <p>“The fire front just swept in, and it marched at about six to seven miles an hour, and it just rode right into the city. 
The observatory was the first to fall,” recalls Frebel, who watched the calamity from the opposite end of Canberra.</p> <p>The fires obliterated the observatory’s historic telescopes, along with several administrative buildings and even some homes of researchers living on the mountainside.</p> <p>“It was a pretty big shock,” Frebel says. “But tragedy also brings out community, and we were all helping each other, and it really bonded us together.”</p> <p>As the campus set to work clearing the ash and rebuilding the facility, Frebel decided to extend her initially one-year visit to Australia — a decision that turned out to be career-making.</p> <p>“I wasn’t going to be deterred by a burned-down observatory,” says Frebel, who was granted tenure this year in MIT’s Department of Physics.</p> <p><strong>Frebel’s star</strong></p> <p>Soon after the fires subsided, Frebel accepted an offer from the Australian National University to pursue a PhD in astronomy. She chose to focus her studies on a then-fledgling field: the search for the universe’s oldest stars.</p> <p>It’s believed that, immediately after the Big Bang exploded the universe into existence, clouds of hydrogen, helium, and lithium coalesced to form the very first generation of stars. These incredibly massive stellar pioneers grew out of control and quickly burned out as supernovas.</p> <p>To sustain their enormous luminosities, atoms of hydrogen and helium smashed together to create heavier elements in their cores, considered to be the universe’s first “metals” — a term in astronomy used to describe all elements that are heavier than hydrogen and helium. These metals in turn forged the second generation of stars, which researchers believe formed just half a billion years after the Big Bang.</p> <p>Since then, many stellar generations have populated the night sky, containing ever more abundant metals.
Astronomers suspect, however, that those early, second-generation stars can still be found in some pockets of the universe, and possibly even in our own Milky Way.</p> <p>Frebel set out to find these oldest stars, also known as “metal-poor” stars. One of her first discoveries was HE 1327-2326, which contained the smallest amount of iron ever detected in a star, estimated at about 1/400,000 that of the sun. Given this extremely low “metallicity,” the star was likely a second-generation star, born very shortly after the Big Bang. Until 2014, Frebel’s star remained the record-holder for the most metal-poor star ever discovered.</p> <p>The results were published in 2005 in <em>Nature</em>, with Frebel, then just two years into her PhD, as lead author.</p> <p><strong>A star turn</strong></p> <p>Frebel went on to work as a postdoc at the University of Texas at Austin, and later the Harvard-Smithsonian Center for Astrophysics, where she continued to make remarkable insights into the early universe. Most notably, in 2007, she discovered HE 1523-0901, a red giant star in the Milky Way galaxy. Frebel estimated the star to be about 13.2 billion years old — among the oldest stars ever discovered and nearly as old as the universe itself.</p> <p>In 2010, she unearthed a similarly primitive star in a nearby galaxy that appeared to have the exact same metallic content as some of the old stars she had observed in the outskirts of our own Milky Way. This seemed to suggest, for the first time, that young galaxies like the Milky Way may “cannibalize” nearby, older galaxies, taking in their ancient stars as their own.</p> <p>“A lot more detail has come to light in the last 10 years or so, and now we’re asking questions like, not just whether these objects are out there, but exactly where did they form, and how,” Frebel says.
“So the puzzle is filling in.”</p> <p>In 2012, she accepted an offer to join the physics faculty at MIT, where she continues to assemble the pieces of the early universe’s history. Much of her research is focused on analyzing stellar observations taken by the twin Magellan telescopes at the Las Campanas Observatory, in Chile. Frebel’s group makes the long trek to the observatory about three times per year to collect light from stars in the Milky Way and small satellite dwarf galaxies.</p> <p>Once they arrive at the mountaintop observatory, the astronomers adapt to a night owl’s schedule, sleeping through the day and rising close to dinner time. Then, they grab a quick bite at the observatory’s lodge before heading up the mountainside to one of the two telescopes, where they remain into the early morning hours, collecting streams of photons from various stars of interest.</p> <p>On nights when bad weather makes data collection impossible, Frebel reviews her data or writes — about the solitary, sleep-deprived experience of observatory work; the broader search for the universe’s oldest stars; and most recently, about an overlooked scientific heroine in nuclear physics.</p> <p><strong>Engaging with the public</strong></p> <p>In addition to her academic work, Frebel makes a point of reaching out to a broader audience, to share her excitement about the cosmos. In one of many essays that she’s penned for such popular magazines as <em>Scientific American</em>, she describes the satisfied weariness following a long night’s work:</p> <p>“Already I am imagining myself drawing the thick, sun-proof shades on my window and resting my head against my pillow.
The morning twilight cloaks the stars overhead, but I know they are there — burning as they have for billions of years.”</p> <p>In 2015, she published her first book, “Searching for the Oldest Stars: Ancient Relics from the Early Universe.” And just last year, she wrote and performed a 12-minute play about the life and accomplishments of Lise Meitner, an Austrian-Swedish physicist who was instrumental in discovering nuclear fission. Meitner, who worked for most of her career in Berlin, Germany, fled to Sweden in 1938 to escape Nazi persecution. There, she and her long-time collaborator Otto Hahn found evidence of nuclear fission. But it was Hahn who ultimately received the Nobel Prize for the discovery.</p> <p>“Scientifically, [Meitner] is absolutely in line with Marie Curie, but she was never recognized appropriately for her work,” Frebel says. “She should be a household name, but she isn’t. So I find it very important to help rectify that.”</p> <p>Frebel has given a handful of performances of the play, during which she appears in the first half, dressed in costume as Meitner. In the second half, she appears as herself, explaining to the audience how Meitner’s revelations influence astronomers’ work today.</p> <p>Getting into character is nothing new for Frebel, who, as a high school student in Göttingen, Germany, took on multiple roles in the school plays. She also took part in what she calls the “subculture of figure-rollerskating” — a competitive sport that is analogous to figure-skating, only on roller skates. During that formative time, Frebel partly credits her mother for turning her focus to science and to the women who advanced their fields.</p> <p>“When I was a teenager, my mom gave me a lot of biographies of women scientists and other notable women, and I still have a little book of Lise Meitner from when I was around 13,” Frebel says. “So I have been very familiar with her, and I do work basically on the topic that she was interested in.
So I’m one of her scientific daughters.”</p> Anna FrebelImage: Bryce VickmarkAstronomy, Astrophysics, Kavli Institute, Physics, Profile, School of Science, Women in STEM, History of science, Science communications, Space, astronomy and planetary science 3 Questions: MIT goes interstellar with Voyager 2 MIT Kavli&#039;s John Richardson describes MIT&#039;s role in the historic passing of the Voyager 2 craft past the heliopause and into the interstellar medium. Mon, 10 Dec 2018 16:20:01 -0500 MIT Kavli Institute for Astrophysics and Space Research <p><em>NASA announced today that the Voyager 2 spacecraft, some 11 billion miles from home, <a href="" target="_blank">crossed the heliopause</a>, the boundary between the bubble of space governed by charged particles from our sun and the interstellar medium, or material between stars, on Nov. 5. In an historic feat for the mission, Voyager 2's plasma instrument, developed at MIT in the 1970s, is set to make the first direct measurements of the interstellar plasma.</em></p> <p><em>The twin Voyager spacecraft were launched in 1977 on a mission to explore the solar system's gas giant planets. With their initial missions achieved and expanded, the spacecraft have continued outward toward the edges of the solar system for the past four decades; today they are the most distant human-made objects from Earth. Voyager 1 is 13 billion miles from Earth and crossed into the interstellar medium in 2012, but its plasma instrument is no longer functioning.</em></p> <p><em>Several researchers from MIT are working directly with data from the Voyager 2 plasma instrument. These include the instrument's principal investigator, John Richardson, a principal research scientist in the MIT Kavli Institute for Astrophysics and Space Research, and John Belcher, the Class of 1992 Professor of Physics. Belcher was part of the original MIT team, led by Professor Herbert Bridge, that built the Voyager plasma instruments in the 1970s. 
Richardson answered some questions about the recent Voyager 2 discoveries.</em></p> <p><strong>Q. </strong>Why is Voyager 2 crossing the heliopause important?</p> <p><strong>A:</strong> Although Voyager 1 already crossed the heliopause in 2012, many questions about this boundary were left unanswered. The biggest was the role of the plasma that contains most of the mass of the charged particles in the solar system, which was not measured by Voyager 1. Our data have shown that there is a boundary layer 1.5 AU [139 million miles] in width inside the heliopause with enhanced densities and decreasing speeds coincident with an increase in the high-energy galactic cosmic rays. We also found that the actual boundary, as defined by the plasma, occurs further in than previously thought based on energetic particle data.</p> <p><strong>Q. </strong>Was the heliopause location where you expected?</p> <p><strong>A:</strong> Before the Voyager spacecraft, the heliopause distance was uncertain by at least a factor of two. The Voyager 1 crossing at 122 AU [about 11.3 billion miles] gave a position in one direction and at one time. The distance varies with time, and many models predict the heliosphere is not spherical. So we really didn't know if Voyager 2 would cross three years ago or three years from now. The distance is very similar to that at Voyager 1, suggesting the heliosphere may be close to spherical.</p> <p><strong>Q.</strong> What will Voyager 2 see next?</p> <p><strong>A:</strong> Voyager 1 has shown us that the interstellar medium near the heliopause is greatly affected by solar storms. The surges in solar wind drive shocks into the interstellar medium, which then generate plasma waves. We hope to directly measure the plasma changes at these shocks. One controversy about the heliosphere is whether there is a bow shock in the interstellar medium upstream of the heliopause that heats, compresses, and slows the plasma. 
Our measurements of the plasma, particularly the temperature, could help resolve this question.</p> NASA’s Voyager 1 and Voyager 2 probes are both now located beyond the heliosphere, a protective bubble created by the sun. Voyager 1 crossed the heliopause, or the edge of the heliosphere, in August 2012. Heading in a different direction, Voyager 2 crossed another part of the heliopause in November 2018.Image: NASA/JPL-CaltechSatellites, Solar System, NASA, Physics, Planetary science, Research, School of Science, Space, astronomy and planetary science, 3 Questions, Kavli Institute, History of science, Aeronautical and astronautical engineering Book explores milestones of astronomical discovery In “Dispatches from Planet 3,” Marcia Bartusiak illuminates overlooked breakthroughs and the people who made them. Tue, 18 Sep 2018 00:00:00 -0400 Peter Dizikes | MIT News Office <p>Here’s a quick rule of thumb about the universe: Everything old is new again.</p> <p>The materials used when new stars or planets form are just recycled cosmic matter, after all. But also, even our latest scientific discoveries may not be as new as they seem.</p> <p>That’s one insight from Marcia Bartusiak’s new book, “Dispatches from Planet 3,” published by Yale University Press, a tour of major discoveries in astronomy and astrophysics that digs into the history behind these breakthroughs.</p> <p>“No discovery comes out of the blue,” says Bartusiak, professor of the practice in MIT’s Graduate Program in Science Writing. “Sometimes it takes decades of preparation for [discoveries] to be built, one brick at a time.”</p> <p>The book, drawn from her columns in <em>Natural History</em>, underscores that point by highlighting unheralded scientists whose work influenced later discoveries.</p> <p>Moreover, as Bartusiak observes in the book, recent scientific debates often echo older arguments. 
Take the kerfuffle last decade about whether or not Pluto should be regarded as a proper planet in our solar system. As Bartusiak recounts in the book, the same thing happened multiple times in the 19th century, when objects called Ceres, Vesta, and Juno first gained and then lost membership in the club of planets.&nbsp;</p> <p>“Ceres in the 19th century was a certified planet, along with Vesta and Juno, the big asteroids, until they got demoted into the general asteroid belt,” Bartusiak says. “Then the same thing happened again, and everyone said, ‘Poor Pluto, it’s not a planet any more.’ Well, I’m sure in the 19th century there were people going ‘Poor Ceres, it’s not a planet.’ We’ll get over it.”</p> <p>(Demoting Pluto, by the way, is a judgment Bartusiak is comfortable with: “They made the right decision. Pluto is a dwarf planet. It’s part of the Kuiper Belt. I’m sure I’ll get a lot of people mad with me, [but] it makes sense to have Pluto in that group, rather than … with the big terrestrial planets and the gas giants.”)</p> <p>One astronomer who made a crucial Pluto-related discovery was Jane X. Luu, who helped locate asteroids orbiting the sun from even farther away. Luu is just one of many women in “Dispatches from Planet 3” — although, Bartusiak says, that was not by design, but simply a consequence of hunting for the origins of important advances.&nbsp;</p> <p>“I did not have an agenda for this book,” Bartusiak says. “I have always been the type of writer that wanted to follow my nose on what the most interesting findings, discoveries, and theories were, without worrying about who was doing them.”</p> <p>But as it happens, many stories about the development of scientific knowledge involve accomplished female scientists who did not immediately become household names.</p> <p>Consider the astronomer Cecilia Payne-Gaposchkin, who in the 1920s, Bartusiak notes, “first knew that hydrogen is the major element of the universe. A major discovery! 
This is the fuel for stars. It was central to astronomical studies. And yet, the greatest astronomer of the time, Henry Norris Russell, made her take [the idea] out of her thesis before they would accept it at Harvard.”</p> <p>Bartusiak’s book also recounts the career of Beatrice Tinsley, an astrophysicist who in the 1970s developed important work about the ways galaxies change over time, before she died in her early 40s.</p> <p>“Who really started thinking about galaxy evolution?” Bartusiak asks. “Beatrice Tinsley, ignored when she first started doing this, [produced] one of the most accomplished PhD theses in astronomical history. She was the first to really take it seriously.”</p> <p>The notion that galaxies evolve, Bartusiak’s book reminds us, is a relatively recent concept, running counter to ages of conventional wisdom.&nbsp;</p> <p>“People thought of the universe as being serene [and that] every galaxy was like the Milky Way,” Bartusiak says. “And that was based on what they could see.” Deep in the postwar era, our empirical knowledge expanded, and so did our conception of galactic-scale activity.</p> <p>In fairness, the Milky Way is pretty placid at the moment.</p> <p>“It will get active again when we collide with Andromeda, 4 billion years from now,” Bartusiak says. “We’re lucky we’re not in the galactic center or in a very active star cluster. You have stars blowing up, and it probably would be hard for life to start if you were in an area where X-rays were raining down on you, or if a supernova was going off nearby. 
We’re off in a little spur in a very quiet part of the Milky Way galaxy, which has enabled life on Earth here to evolve and flourish without a cosmic incident raining havoc down upon us.”</p> <p>Bartusiak closes the book with chapters on black holes, the idea of the multiverse, and our problems in conceptualizing what it means to think that the universe had a beginning.</p> <p>“We think that black holes and gravitational waves are strange, but there may be stranger things to come,” Bartusiak says. “As I say in a chapter with [Harvard theoretical physicist] Lisa Randall, experimenters and theorists used to work in tandem … and now the theorists have moved so far from observations that it’s a little frightening. There’s a need for new instrumentation, the new James Webb telescope, the new particle accelerators.”</p> <p>Which ultimately brings Bartusiak to another part of science that definitely has precedent: the need for funding to support research.</p> <p>“The bigger the instrument, the further out you can see, or the further down into spacetime you can see, so I want people to realize that if you want these stories to continue, you’re going to need a further investment,” Bartusiak says. “But that’s what makes us a civilization. That we can take at least some of our wealth and use it to expand our knowledge about where we live. And that includes the universe, not just the Earth.”</p> Marcia Bartusiak and her new book, “Dispatches from Planet 3”Books and authors, Faculty, History of science, Science writing, Writing, Humanities, Astronomy, Astrophysics, Women, School of Humanities Arts and Social Sciences, space, Space, astronomy and planetary science Contemplating the eyes in the sky Media studies scholar Lisa Parks examines the way satellites and other aerial technologies have changed society. 
Fri, 20 Jul 2018 00:00:00 -0400 Peter Dizikes | MIT News Office <p>Satellites have changed the way we experience the world, by beaming back images from around the globe and letting us explore the planet through online maps and other visuals. Such tools are so familiar today we often take them for granted.</p> <p>Lisa Parks does not. A professor in MIT’s Comparative Media Studies/Writing program, Parks is an expert on satellites and other aerial technologies, and on their cultural effects. Her work analyzes how technology informs the content of our culture, from images of war zones to our idea of a “global village.”</p> <p>“I really wanted people to think of the satellite not only as this technology that’s floating around out there in orbit, but as a machine that plays a structuring role in our everyday lives,” Parks says.</p> <p>Accordingly, Parks argues, we often need to think more crisply about both the power and the limitations of the technology. Satellite images helped reveal the presence of mass graves following the Srebrenica massacre during the Balkan wars of the 1990s, for instance. But they became a form of “proof” only after careful follow-up reporting by journalists and other investigators who reconstructed what had happened. Satellites often offer hints about life on the ground, but not omniscience.</p> <p>“Since satellite images are so abstract and remote, they necessitate closer scrutiny, re-viewing, careful description, and interpretation in ways that other images of war do not,” Parks writes in her 2005 book “Cultures in Orbit.”</p> <p>Alternately, satellite images can open up our world — or be exclusionary. The landmark 1967 BBC show “Our World,” one of the first broadcasts to feature live global satellite video links, was touted as a global celebration. 
But as Parks writes, it reinforced distinctions between regions, by emphasizing “the modernity, permanence, and civilizational processes of industrial nations,” and thus “undermining the utopian assumption that satellites inevitably turned the world into a harmonic ‘global village.’”&nbsp;</p> <p>For her distinctive scholarship, Parks was hired by MIT in 2016. She studies a range of media technologies — from the content of television to drone imagery — and has co-edited five books of essays on such topics, including the 2017 volume “Life in the Age of Drone Warfare.” Parks is also the principal investigator for MIT’s Global Media Technologies and Cultures Lab, which conducts on-site research about media usage in a range of circumstances.</p> <p>“Technology and culture is what I’m interested in,” Parks says.</p> <p><strong>Big sky, then and now</strong></p> <p>Parks grew up in Southern California and Montana. Her father was a civil engineer and her mother was a social worker — a combination, Parks suggests, that may have helped shape her interests in the social effects of technology.</p> <p>As an undergraduate at the University of Montana, Parks received her BA in political science and history. She initially expected to become a lawyer but then reconsidered her career path.</p> <p>“I didn’t want to be in an office all of the time,” Parks says. So she went back to the classroom, at the University of Wisconsin at Madison, where she received her PhD in media studies. It was there that Parks’ attention really turned to the skies and the technologies orbiting in them. She wrote a research paper on satellites that turned into both her dissertation and first book. Parks then took a job at the University of California at Santa Barbara, where she taught for over a decade before joining MIT.</p> <p>“I loved my job there, I loved working in the U.C. system, and I had excellent colleagues,” says Parks. 
Still, she adds, she was fascinated by the opportunities MIT offers, including its abundant interdisciplinary projects that pull together researchers from multiple fields.</p> <p>“MIT seems to really value those kinds of relationships,” Parks says.</p> <p>In the classroom, Parks teaches an undergraduate course on current debates in media, which grapples with topics ranging from surveillance to net neutrality and media conglomerations. For graduate students, she has been teaching a foundational media theory course.&nbsp;</p> <p>“If you’re an MIT student and you want to come out of this place having thought about some of the policy implications relating to the media in this current environment, our classes equip you to think historically and critically about media issues,” Parks says.</p> <p><strong>Technology … and justice for all</strong></p> <p>One other issue strongly motivates Parks’ scholarship: the idea that technology is unevenly distributed around the world, with important implications for inequality.</p> <p>“Most people in the world live in relatively disenfranchised or underprivileged conditions,” Parks says. “If we shift the question about designing technologies so they serve a broader array of people’s interests, and designs are interwoven with concerns about equity, justice, and other democratic principles, don’t those technologies start to look different?”</p> <p>To this end, MIT’s Global Media Technologies and Cultures Lab, under Parks’ direction, studies topics such as media infrastructure, to see how video is distributed in places such as rural Zambia. 
Parks’ research has also examined topics such as the video content accessible to Aboriginal Australians, who, starting in the 1980s, attempted to gain greater control of, and autonomy over, the satellite television programming in rural Australia.</p> <p>Parks’ research takes place in a variety of social and economic orbits: In March, you could have found her and a research assistant, Matt Graydon, at the Satellite 2018 convention in Washington, interviewing CEOs and industry leaders for a new study of satellite-based internet services.</p> <p>In some places around the globe, the effects of aerial technology are more immediate. In the volume on drones, Parks writes that these tools create a “vertical mediation” between ground and sky — that when “drones are operating in an area over time, above a certain region, they change the status of sites and motions on the ground.” She elaborates on this in her new book, out this year, “Rethinking Media Coverage: Vertical Mediation and the War on Terror.”</p> <p>As diverse as these topics may seem at first, Parks’ scholarly output is intended to explore more deeply the connection between aerial and orbital technologies and life on the ground, even if it is not on the mental radar for most of us.&nbsp;</p> <p>“We need to be studying these objects in orbit above, and think about orbital real estate as something that’s relevant to life on Earth,” Parks says.</p> “I really wanted people to think of the satellite not only as this technology that’s floating around out there in orbit, but as a machine that plays a structuring role in our everyday lives,” says Lisa Parks, a professor in MIT’s Comparative Media Studies/Writing program.Photo: Jake BelcherFaculty, Profile, Satellites, Humanities, History of science, Comparative Media Studies/Writing, Technology and society, Autonomous vehicles, School of Humanities Arts and Social Sciences 3 Questions: Melissa Nobles and Craig Steven Wilder on the MIT and Legacy of Slavery project MIT 
Community Dialogue series is underway as multi-year research continues. Mon, 30 Apr 2018 13:40:01 -0400 School of Humanities, Arts, and Social Sciences <p><em>The first class of the "MIT and Slavery" undergraduate research project ran in the fall of 2017. Set in motion by MIT President L. Rafael Reif with Melissa Nobles, the Kenan Sahin Dean of the School of Humanities, Arts, and Social Sciences, the course was developed and taught by Craig Steven Wilder — the Barton L. Weller Professor of History and the nation’s leading expert on the links between universities and slavery — in collaboration with Nora Murphy, the MIT archivist for Researcher Services.</em></p> <p><em>The findings from the initial class include insights about MIT's role in the post-Civil War era of Reconstruction; examples of racism in the culture of the early campus; and the fact that MIT’s founder, William Barton Rogers, had six enslaved people in his Virginia household, before he moved to Massachusetts in 1853. The findings also suggest new lines of research that will enable MIT to contribute to a larger national conversation about still hidden legacies of slavery, especially the relationship between the Atlantic slave economies, the fields of science and engineering, and U.S. technical institutions.</em></p> <p><em>As the "MIT and Slavery" research continues over the coming semesters, MIT is also conducting a community dialogue series, MIT and the Legacy of Slavery, led by Dean Melissa Nobles. The dialogues are an opening chapter in MIT's commitment to researching this history and making it public. A series of events will create campus-wide and community-wide opportunities for shared discussions of the findings and our responses. 
The first event in this series was held in February, and the second, <a href="" target="_blank">The Task of History</a>, takes place Thursday, May 3, 5-7 p.m.<br /> <br /> SHASS Communications spoke with Nobles and Wilder to hear their thoughts about the ongoing research project and the community dialogue series.&nbsp;</em></p> <p><strong>Q:</strong> MIT’s approach to exploring the Institute’s historical relationship to slavery is unfolding somewhat differently than the process at other universities. Can you describe MIT’s approach, and what it means for the community and the Institute's responses to the research findings?</p> <p><strong>Wilder:</strong> Our undergraduate students are engaged in an ongoing research project examining MIT’s ties to slavery. As I like to note, MIT students are rewriting the history of MIT for MIT. Their focus on the early history of the Institute allows us to explore the connections between engineering, science, and slavery in antebellum America, which will make a significant and new contribution to the work being done by the dozens of universities that are now researching their historical ties to slavery. MIT is uniquely positioned to lead the research on this subject.</p> <p><strong>Nobles: </strong>It has been 15 years since Brown University launched its three-year study of the university’s historical connections to slavery. Since then, several other colleges and universities, including Georgetown, Harvard, and Yale, have taken up similar multi-year studies. Three key features distinguish our project from these earlier efforts — to which we are indebted for the precedents they provide.</p> <p>The first is that rather than the research project starting unofficially and at the faculty level, in this case President Reif and I initiated the process, consulting with MIT historian Craig Steven Wilder about the best way to respond to inquiries about MIT’s connections to slavery. 
Neither the president nor I knew the answers to those questions. But we did appreciate our great good fortune in being able to turn to Craig, the nationally recognized expert on the relationship of slavery and American higher education and the author of "Ebony and Ivy: Race, Slavery, and the Troubled History of America's Universities." Craig recommended an innovative approach, which he then developed with Archivist Nora Murphy: a new, ongoing MIT undergraduate research class to explore this aspect of MIT's story. President Reif and I provide resources and support.</p> <p>The second distinctive quality, which flows from the first, has to do with timing. The norm at other universities is that some years of research predate the public release of the findings. By contrast, MIT announced the initial findings only a few months into the project and will continue releasing new findings each term. This means that the MIT community as a whole has the opportunity to be involved in this endeavor in real-time, as the research matures, learning from the emerging findings — and making informed suggestions for potential official Institute responses. We do not know what the research will find in full, nor what it will ask of us, and I envision a fluid process, one that can respond to new findings, as our community and leadership take the measure of this new dimension of MIT history.</p> <p>The third distinctive aspect is our project’s intellectual scope, which — by virtue of MIT’s expertise in science and technology — also allows us to explore a more far-reaching question: the connections between the development of scientific and technological knowledge and the institution of slavery and its legacies. 
The Institute’s founding at the start of the Civil War in 1861 involves MIT in one of the earliest such legacies: the reconstruction of America’s southern states, and new social, legal, and economic realities that arose in the transition from slave to free labor, some of which we continue to grapple with today.<br /> &nbsp;<br /> <strong>Q:</strong> At President Reif’s request, Dean Nobles is leading a series of community dialogues about the early findings from the "MIT and Slavery" class. What plans are there for this phase, and what do you hope the dialogues will produce?</p> <p><strong>Wilder: </strong>The community dialogues are an effort to bring the early and ongoing research from the "MIT and Slavery" course to the various constituencies on campus, to our alumni, and to people and institutions in the Cambridge-Boston area. Our history can help us make new and lasting connections to communities that neighbor MIT but remain separate from it. Dean Nobles is planning an exceptionally rich and inviting range of events and activities to anchor these community exchanges. The forums will provide opportunities for us to receive feedback on the project and to solicit opinions on how MIT can respond to this history as the research continues to unfold.</p> <p><strong>Nobles: </strong>I envision the community dialogues as fulfilling two purposes. The first, and most important, is to engage and deepen our collective understanding of the history and issues surrounding MIT, slavery, and Reconstruction, which was itself the immediate legacy of slavery. The second is to provide various ways by which the MIT community can engage with the ideas and questions raised by the research.<br /> <br /> We will shape the dialogues to reflect and advance these two purposes. We will also organize activities, such as small group gatherings, film screenings, panel discussions, and other creative projects designed to encourage and catalyze conversation and reflection. 
We envision a number of activities each semester. One hope is that the dialogues will inspire MIT community members to incorporate the research findings, and the questions they raise, into their own thinking, teaching, and endeavors.</p> <p>For example, during our February event, at which the first group of student-researchers announced their early findings, Alaisha Alexander '18 summoned the audience to a creative investigation. She asked that we all go back to our labs, libraries, and classrooms, and be newly alert for ways in which larger social issues, and specifically, racial issues, may be embedded or reflected in our fields. This strikes me as an extremely important question, one worth asking precisely because now, as in the past, larger social, political, and economic processes are inextricably connected to technological and scientific advances. Examining MIT’s history and its connection to slavery allows us to think in new ways — about our past but also about the present and future.<br /> <br /> And, of course, as the research and the dialogue series progress, we will always be interested in hearing from the MIT community. In addition to responses via emails and participation in scheduled events, we will set up a mechanism so that community members can contribute comments, ideas, suggestions, and insights.</p> <p><strong>Q: </strong>Alongside the MIT and Slavery project, Professor Wilder and others are engaged in creating a consortium of technical universities that will research broader questions of the relationship of the sci/tech fields to the institution of slavery and the U.S. slave economy. Do you envision ways that MIT faculty, students, and staff can participate in this broader research effort?<br /> &nbsp;<br /> <strong>Wilder:</strong> The goal of the consortium is to bring several antebellum and Civil War-era engineering and science schools together to produce a more complete history of the rise of these fields in the Atlantic slave economy. 
The current plan is to have each school establish a research project that draws on its strengths and reflects its institutional needs. The consortium will help coordinate efforts and move resources between universities, and it will host regular conferences where participating faculty, archivists, librarians, and students can share their research.<br /> <br /> <strong>Nobles:</strong> I am really looking forward to this multi-university research project because it will shine a bright light on long-understudied dimensions of the historiography of slavery and of science and technology. For example, in most American history classes, we learn that the introduction of the mechanical cotton gin in the early 1800s exponentially transformed the productivity and hence profitability of cotton cultivation. This technological “advance” for productivity also meant, of course, an intensified need for slave labor to grow and harvest ever-increasing amounts of cotton. Undoubtedly, the connections of science and technology with slavery go far deeper and wider than the cotton gin. The entanglement of the slave economy, science, and technology is a very rich topic area, and one that MIT is uniquely qualified to examine.</p> “I believe the work of this class is important to the present — and to the future," says MIT President L. Rafael Reif. "What can history teach us now, as we work to invent the future? How can we make sure that the technologies we invent will contribute to making a better world for all?" Pictured: Craig Steven Wilder, the Barton L. Weller Professor of History, and Melissa Nobles, professor of political science and the Kenan Sahin Dean of the MIT School of Humanities, Arts, and Social Sciences.Photos: Richard Howard (Wilder) and Justin Knight (Nobles)History of MIT, Slavery, Diversity and inclusion, Faculty, 3 Questions, History, History of science, MIT Libraries, President L. 
Rafael Reif, SHASS, Staff, Students, School of Humanities Arts and Social Sciences Featured video: MIT’s meteorology pioneers Department of Earth, Atmospheric and Planetary Sciences honors trailblazing professors Jule Charney and Edward Lorenz with a tribute to their lives and legacies. Tue, 17 Apr 2018 12:50:12 -0400 MIT News Office <div class="cms-placeholder-content-video"></div> <p>Born 100 years ago, two extraordinary pioneers of meteorology forever changed our understanding of the atmosphere and its patterns: MIT professors Jule Charney and Edward Lorenz. Beginning in the late 1940s, Charney developed a system of equations capturing important aspects of the atmosphere’s circulation, enabling him to pioneer numerical weather prediction, which we use today. A decade later, Lorenz observed that atmospheric circulation simulations with slightly different initial conditions produced rapidly diverging numerical solutions. This discovery led him to propose that atmospheric dynamics exhibit chaotic behavior, an idea that has since been popularized as “the butterfly effect” and has changed the way we understand the weather and climate.</p> <p>As MIT professors and department heads, these individuals contributed numerous insights to the field and profoundly influenced the next generation of leaders in atmospheric, oceanographic, and climate sciences. During their time, Jule Charney and Edward Lorenz left an indelible mark on the field of meteorology, and their legacy lives on within MIT’s Department of Earth, Atmospheric and Planetary Sciences.</p> <p><em>Submitted by: EAPS | Video by: Meg Rosenburg | 15 min, 3 sec</em></p> MIT professors Jule Charney (left) and Edward LorenzImage: Meg RosenburgFeatured video, Weather, Climate, EAPS, School of Science, History of science, History of MIT Inaugural class of MIT-GSK Gertrude B. 
Elion Research Fellows selected New fellowship program honoring trailblazing Nobel laureate awards four MIT postdocs focused on drug discovery and development. Tue, 17 Apr 2018 12:00:27 -0400 Erin Edwards | Institute for Medical Engineering and Science <p>Jay Mahat from the Sharp Lab at the Koch Institute for Integrative Cancer Research, Benjamin Mead from the Shalek Lab at the Institute for Medical Engineering and Science, Nicholas Struntz from the Koehler Lab at the Koch Institute, and Sarvesh Varma from the Biological Microtechnology and BioMEMS Group at the Research Laboratory of Electronics have been awarded two-year postdoctoral fellowships through the MIT-GSK Gertrude B. Elion Research Fellowship Program for Drug Discovery and Disease.</p> <p>The fellowship program is a new initiative between MIT and GlaxoSmithKline (GSK) that aims to promote basic research while introducing young scientists to key aspects of pharmaceutical research and development. It honors <a href="" target="_blank">Gertrude Belle Elion</a> (1918-1999), an early leader in the field of chemotherapeutic agents who worked for many years at Burroughs Wellcome, which became Glaxo Wellcome in 1995 and GlaxoSmithKline in 2000. Although Elion never finished a PhD due to her need to work full-time, she eventually received at least 25 honorary doctorate degrees and numerous awards in recognition of her scientific achievements. In 1988, she shared the Nobel Prize in physiology or medicine for the discoveries of important principles for drug treatment in developing compounds to treat conditions such as leukemia, viral and bacterial infections, malaria, and gout. In 1991, she was awarded the National Medal of Science and was the first woman inducted into the National Inventors Hall of Fame, and in 1997, she was awarded the Lemelson-MIT Lifetime Achievement Award for her groundbreaking work in developing therapies for cancer and leukemia.</p> <p>The Gertrude B. 
Elion Research Fellows are basic or applied scientists and engineers at MIT who are interested in innovative technology and/or platforms that can enable transformative advances in drug discovery. They will receive funding for salary and benefits, lab supplies, and indirect costs for two years to conduct research in the laboratory of a principal investigator at MIT, and they will have ancillary mentorship from a GSK mentor. A critical component of the program will be ongoing communication and exchange of information amongst the fellow, MIT principal investigator, and GSK mentor.</p> <p>The next call for applications for the MIT-GSK Gertrude B. Elion Research Fellowship Program for Drug Discovery and Disease will occur in 2019.</p> Left to right: Jay Mahat, Nicholas Struntz, Benjamin Mead, and Sarvesh Varma are the first recipients of the MIT-GSK Gertrude B. Elion Research Fellowship Program for Drug Discovery and Disease.Photo: Riley DrakeAwards, honors and fellowships, Graduate, postdoctoral, Women in STEM, History of science, Drug discovery, Drug development, Pharmaceuticals, Collaboration, Industry, Health sciences and technology, Koch Institute, Research Laboratory of Electronics, School of Engineering, Institute for Medical Engineering and Science (IMES), School of Science Celebrating 50 years of transatlantic geodetic radio science Researchers mark a golden anniversary for a space geodesy technique pioneered at MIT. 
Thu, 05 Apr 2018 14:15:01 -0400 MIT Haystack Observatory <p>Fifty years ago this week, in April 1968, a historic event took place involving the MIT Haystack Observatory radio telescope in Westford, Massachusetts, and its counterpart at <a href="" target="_blank">Onsala Space Observatory</a> in Onsala, Sweden: the first transatlantic geodetic very long baseline interferometry (VLBI) observations.</p> <p><a href="" target="_blank">Geodesy</a> is the science of measuring and analyzing the Earth's shape, orientation in space, and gravity field, which enables the detection of geophysical signals such as global sea level rise, polar ice loss, and plate tectonic motions with unparalleled accuracy. Prominent space geodetic techniques include GPS and VLBI. Astronomical VLBI uses multiple radio antennas spaced across the globe to image distant objects such as black holes (as in the <a href="" target="_blank">Event Horizon Telescope</a> project), while geodetic VLBI monitors changes on our planet by measuring the telescopes' positions over time. Both rely on the differences between the times at which the antennas receive signals from extragalactic radio sources.</p> <div class="cms-placeholder-content-video"></div> <p>This occasion marks an important anniversary in geodesy; although the April observations, conducted in VLBI's very earliest days, were not entirely successful in terms of usable data, it was the first time that geodetic VLBI was performed across the Atlantic.
Successful VLBI work was first completed in 1967 by several groups, including Haystack and the Green Bank Telescope in West Virginia — one of several collaborations honored with the American Academy of Arts and Sciences' 1971 Rumford Prize.</p> <p>Today, <a href="">NASA's Space Geodesy Project</a> (SGP) operates a worldwide system of modern geodetic sites, including the broadband <a href="">VGOS (VLBI Global Observing System) network</a>, in collaboration with international partners around the globe.</p> <p>As part of current innovative VGOS development efforts, MIT Haystack Observatory and Onsala Space Observatory will be making regularly scheduled observations this week that happen to align with the historic events of April 1968. The two observatories are celebrating the occasion on Thursday, April 5, to honor the scientists and engineers in the U.S. and Sweden who made such achievements possible and helped launch decades of successful VLBI experiments worldwide.</p> <p>Several scientists and engineers who contributed to the early VLBI development are still working today, including Alan Whitney of MIT Haystack Observatory and Jim Moran of the Harvard-Smithsonian Center for Astrophysics. Whitney describes the confidence of the early VLBI pioneers:</p> <p>“There was really no doubt that the concept [of VLBI] was sound, and it was a matter of implementing it properly. And there was a lot of confidence that it could be done, but with the realization that it would take quite a bit of effort and probably several tries before we got it right. I always felt optimistic about the potential for VLBI.
There's still a lot more to do in terms of improving the techniques; we're going to learn a lot more as the VGOS system is deployed worldwide.”</p> <p>Scientists developing the pioneering VLBI techniques in the late 1960s remember it as an exciting period in their careers, including such details as the difficulties of recording data on numerous half-inch magnetic tapes, which held 16 megabytes on a 12-inch reel, lasted for about three minutes of observations, and presented many technical challenges. An experiment might cover hundreds or thousands of tape reels. Technology for correlation and data recording has progressed immensely since 1968 — the current equivalent recorder, called the Mark 6, holds up to 32&nbsp;terabytes and could fill one of those 1960s reels in about 8 milliseconds — and shows no signs of slowing over the next decades. As Jim Moran says, "There is a press for increased bandwidth: There are tremendous possibilities for pushing the boundaries of VLBI. It is still a very rich field. It's amazing that we're continuously observing in VLBI 50 years later."</p> <p>Observatories in NASA SGP's VGOS network are looking forward with particular enthusiasm to this week's observations.</p> MIT Haystack Observatory in Westford, Massachusetts (pictured), and the Onsala Space Observatory in Onsala, Sweden, are celebrating 50 years since the first transatlantic geodetic very long baseline interferometry (VLBI) observations. Photo: MIT Haystack Observatory. Haystack Observatory, Space, astronomy and planetary science, History of science, NASA Professor Tom Leighton wins 2018 Marconi Prize Akamai co-founder honored for pioneering the content delivery network services industry. Thu, 22 Mar 2018 16:55:01 -0400 Sandi Miller | Department of Mathematics <p>MIT professor of mathematics Tom Leighton has been selected to receive the 2018 Marconi Prize.
The Marconi Society, dedicated to furthering scientific achievements in communications and the Internet, is honoring Leighton for his fundamental contributions to technology and the establishment of the content delivery network (CDN) industry.</p> <p>Leighton ’81, a professor in the Department of Mathematics and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL), will be honored at the Marconi Society’s annual awards dinner in Bologna, Italy, on Oct. 2.</p> <p>“Being recognized by the Marconi Society is an incredible honor,” said Leighton. “It’s an honor not just for me, but also for Danny Lewin, who created this company with me, and for all of the people at Akamai who have worked so hard for over two decades to make this technology real so that the internet can scale to be a secure and affordable platform where entertainment, business, and life are enabled to reach unimagined potential.”</p> <p>Leighton developed the algorithms now used to deliver trillions of content requests over the internet every day. Akamai, the world’s largest cloud delivery platform, routes and replicates content over a gigantic network of distributed servers, using algorithms to find and utilize servers closest to the end user, thereby avoiding congestion within the internet.</p> <p>“Tom’s work at MIT and with Akamai has had a groundbreaking impact in making the world a more connected place," says Professor Daniela Rus, director of CSAIL.
"His insights on web content delivery have played a key role in enabling us to share information and media online, and all of us at CSAIL are so very proud of him for this honor."</p> <p>“What is amazing about Tom is that, throughout his career, he is and has been as comfortable and talented as a researcher designing clever and efficient algorithms, as an educator teaching and mentoring our undergraduate and graduate students, as an entrepreneur turning mathematical and algorithmic ideas into a rapidly-expanding startup, and as an executive and industry leader able to weather the storm in the most difficult times and bring Akamai to a highly successful company,” says Michel Goemans, interim head of the mathematics department.</p> <p>Leighton has said that Akamai’s role within the internet revolution was to end the “World Wide Wait.” World Wide Web founder and 2002 Marconi Fellow <a href="" target="_blank">Tim Berners-Lee</a>, who was the 3Com Founders chair at MIT’s Laboratory for Computer Science (LCS), foresaw an internet congestion issue and in 1995 challenged his MIT colleagues to invent a better way to deliver content. Leighton set out with one of his brightest students, Danny Lewin, to solve this challenge using distributed computing algorithms.</p> <p>After two years of research, Leighton and Lewin discovered a solution — but then faced the challenge of convincing others that it would work. In 1997, they entered the $50K Entrepreneurship Competition run by the MIT Sloan School of Management.</p> <p>“We literally went to the library and got the equivalent of 'Business Plans for Dummies' because, as theoretical mathematicians, we had no experience in business,” Leighton remembers. But they learned quickly from those who did, including business professionals they met through the $50K Competition.</p> <p>At the time, Leighton and Lewin didn’t envision building their own company around the technology. Instead, they planned to license it to service providers. 
However, they found that carriers needed to be convinced that the technology would work at scale before they were interested. “Akamai was state-of-the-art in theory, meaning that it was well beyond where people were in practice. I think folks were very skeptical that it would work,” says Leighton.</p> <p>While carriers were ambivalent, content providers were receptive: The internet had proven vulnerable to congestion that was crashing websites during high-demand periods. So Leighton and Lewin decided to build their own content delivery network and provide content delivery as a service. Although their business plan did not win the $50K contest, it attracted enough venture capital investment to get a company started, and Leighton and Lewin incorporated Akamai in 1998.</p> <p>Akamai’s first big opportunity came in 1999 with the U.S. collegiate basketball tournament known as “March Madness.” With 64 teams playing basketball over the course of a few days, millions of viewers were watching their favorite teams online, mostly from work. When ESPN and its hosting company Infoseek became overloaded with traffic, they asked if Akamai could handle 2,000 content requests per second.</p> <p>Leighton and his team said yes — even though up to that point they had only been delivering one request every few minutes. “We were a startup and we believed,” said Leighton. Akamai was able to handle 3,000 requests per second, helping ESPN to get back online and run six times faster than on a normal traffic day.</p> <p>Akamai’s technology and viability were proven; the company went public in 1999, earning millions for several of its young employees. But when the tech bubble burst the next year, Akamai’s stock plummeted and the firm faced the prospect of retrenchment. Then, on September 11, 2001, <a href="">Danny Lewin</a> was killed aboard American Airlines Flight 11 in the terrorist attack on the Twin Towers.
Akamai employees had to set aside their personal grief and complete emergency integrations to restore client sites that had crashed in the overwhelming online traffic created that day.</p> <p>Akamai rebounded from that dark period, and over the years evolved from delivering static images to handling dynamic content and real-time applications like streaming video. Today, Akamai has over 240,000 servers in over 130 countries and within more than 1,700 networks around the world, handling about 20 to 30 percent of the traffic on the internet. Akamai accelerates trillions of internet requests each day, protects web and mobile assets from targeted application and DDoS attacks, and enables internet users to have a seamless and secure experience across different device types and network conditions. The company has also created technology that leverages machine learning to analyze real-user behavior and continuously optimize a website’s performance, as well as algorithms that differentiate between human users and bots. Akamai’s security business has surpassed half a billion dollars per year in revenue, making it the fastest-growing part of the company.</p> <p>“Dr. Leighton is the embodiment of what the Marconi Prize honors,” says Vint Cerf, Marconi Society chair&nbsp;and chief internet evangelist at Google. “He and his research partner, Danny Lewin, tackled one of the major problems limiting the power of the internet, and when they developed the solution, they founded Akamai — now one of the premier technology companies in the world — to bring it to market. This story is truly remarkable.”</p> <p>By receiving the Marconi Prize, Leighton joins a <a href="">distinguished list of scientists</a> whose work underlies all of modern communication technology, from the microprocessor to the internet, and from optical fiber to the latest wireless breakthroughs.
Other <a href="">Marconi Fellows</a> include 2007 winner Ron Rivest, an Institute Professor, a member of CSAIL and the lab's <a href="">Theory of Computation Group</a>, and a founder of its <a href="">Cryptography and Information Security Group</a>; and LIDS adjunct Dave Forney, ScD (EE) ’65, who received it in 1997. &nbsp;</p> <p>In 2016, the MIT Graduate School Council awarded Leighton, jointly with Dean of Science Michael Sipser, the Irwin Sizer Award, for most significant improvements to MIT education, specifically for their development of the successful 18C major: Mathematics with Computer Science. Leighton was also inducted into the National Inventors Hall of Fame in 2017 for Content Delivery Network methods; Danny Lewin was also inducted posthumously.</p> <p>Leighton said he plans to donate the $100,000 Marconi Prize to <a href="">The Akamai Foundation</a>, with the goal of promoting the pursuit of excellence in mathematics in grades K-12 to encourage the next generation of technology innovators.</p> MIT Professor Tom Leighton has won the 2018 Marconi Prize for his fundamental contributions to technology and the establishment of the content delivery network (CDN) industry.Photo courtesy of the Department of Mathematics.Awards, honors and fellowships, Faculty, Internet, Mathematics, Technology and society, History of science, School of Science, Computer Science and Artificial Intelligence Laboratory (CSAIL), Algorithms, Computer science and technology, Industry, Alumnai/ae, School of Engineering Event explores initial findings from “MIT and Slavery” class Students bring the Institute into national conversation about universities and the institution of slavery in the United States. Fri, 23 Feb 2018 15:30:00 -0500 School of Humanities, Arts, and Social Sciences <p>In 1882, MIT students socialized in a drawing room that featured a replica of J.M.W. 
Turner’s painting “The Slave Ship,” which shows enslaved people drowning, thrown overboard during a storm as expendable cargo. The students’ commentary centered on the painting’s bold colors, but ignored the violent human narrative.</p> <p>On Friday, Feb. 16, MIT senior Alaisha Alexander stood under a projection of that haunting image, and noted the absence of that human narrative in the campus dialogue of the time. Early MIT coursework also referred to scientific literature that validated slavery, she said, without encountering opposition from professors or students. “It’s not just about what is taught at a university. It’s also about what isn’t,” said Alexander, a mechanical engineering student. “Science and technology aren’t neutral.”</p> <p>Alexander and other MIT students have begun exploring the university’s entanglement with the institution of slavery, in the process writing a more complete history, and helping to catalyze a national conversation about the legacies of slavery in science, engineering, and technical education. The source of this momentum is a new, ongoing undergraduate research course, “MIT and Slavery” (21H.S01). <a href="" target="_blank">Set in motion by MIT President L. Rafael Reif</a> with School of Humanities, Arts, and Social Sciences (SHASS) Dean Melissa Nobles, the course was developed and taught by Craig Steven Wilder, the Barton L. Weller Professor of History and the nation’s leading expert on the links between universities and slavery, in collaboration with Nora Murphy, the MIT Archivist for Researcher Services.</p> <div class="cms-placeholder-content-video"></div> <p><strong>How can history help us invent a better future?</strong></p> <p>The power of stories and seeking the facts were primary threads of discussion among the nine speakers during Friday’s event, the first of the “MIT and the Legacy of Slavery” dialogues that will engage the MIT community in considering responses to the course findings.
A single MIT course rarely prompts community-wide conversations, but the research of the “MIT and Slavery” course speaks not only to a more complete understanding of the Institute’s own history, but to the roots of ongoing culture-wide issues of justice, inclusion, and human rights.</p> <p>“I believe the work of this class is important to the present — and to the future,” President Reif said in his welcoming remarks to around 200 faculty, students, alumni, and a livestream audience at the event. “Something I have always loved about the MIT community is that we seek, and we face, facts. What can history teach us now, as we work to invent the future? How can we make sure that the technologies we invent will indeed contribute to making a better world for all?”</p> <p><strong>The power of facts — and stories</strong></p> <p>Four MIT students from the first class presented well-researched information and narratives — previously obscured, forgotten, ignored — that shed new light on the history of science and technology in the U.S. One of many revelations unearthed in the course involves the story of MIT’s founder and first president William Barton Rogers. As Murphy discovered in the U.S. Census Slave Schedule of Virginia, before Rogers moved to Massachusetts in 1853, he owned six enslaved people, who, according to the census records, lived in his Virginia household.</p> <p>This discovery hardly surprises scholars such as Wilder. In his words, “If we're surprised, our surprise is a measure of how successful we’ve been as a nation at erasing the history of slavery,” including its pervasive links with the economy and major institutions, in the Northeast as well as the South. Many U.S.
engineering schools, for example, were originally funded by families whose wealth derived from textile, sugar, and mining operations, which depended, directly or indirectly, on the labor of enslaved people.</p> <p><strong>A new space for research and conversation</strong><br /> <br /> All the early findings from the new course, and those from future classes, will contribute to advancing a national dialogue, Wilder said: “We are&nbsp;not only&nbsp;participating&nbsp;in a larger exploration of the ties between American universities and slavery, we are leading a part of it.” Wilder said he hopes the MIT project inspires other science and technology institutions across the country to revisit their histories, and to form a collaborative research effort on the relationship between science, engineering, and the slave economies of the Atlantic World. Wilder is partnering with colleagues at New York University to convene several schools this spring to launch the initiative.</p> <p>“The goal of our work is to collectively tell our story in the most honest, complicated, full, and transparent way that we can,” Wilder said during Friday’s event. 
Such a narrative will create space for much better conversations on campuses, in cities, in states, and across the country, he explained, adding that “what we mean by race, social justice, inclusion, and diversity” for the present and the future can only be understood when seen against an accurate historical backdrop.</p> <p><strong>Fundamental to the nation's history</strong></p> <p>In 1861, when MIT was founded, the political and social order in the U.S., along with its economy, was still fundamentally shaped by the institution of slavery, said Nobles, who provided an overview of the cultural and economic context in which MIT was founded, and will lead MIT’s process of community discussions to consider responses to the “MIT and Slavery” course findings.</p> <p>The legacy of slavery is enmeshed in the histories of many of the country’s oldest and most prestigious institutions, said Nobles, who is also a professor of political science at MIT. “Slavery was so fundamental to our country’s history, economy, and politics that it would only be surprising if there were no connections at MIT.”</p> <p>Indeed all scientific knowledge is embedded in a social context, said the course’s teaching assistant Clare Kim, a fifth-year PhD candidate. Her students visited the MIT archives and pored over old issues of the student newspaper <em>The Tech</em> and the MIT yearbook <em>Technique</em>. They also read faculty minutes, course catalogs, and a wealth of secondary source materials.</p> <p>“These students interrogated not only our assumptions about MIT and slavery — but also race, science, and technology,” Kim said. She urged the audience to do more than passively receive the facts the class has found. “Go back to your labs and offices and look at your environment. 
Consider how the way you think about MIT — and science and technology — includes traces of the histories you are about to hear today.”</p> <p><strong>Insights from MIT students</strong></p> <p>Gasps were audible as Alexander, the mechanical engineering student, delved into early MIT silence around “The Slave Ship” painting and other racialized art and literature. She ended her presentation by saying, “I encourage you to think about where different notions of science come from.”</p> <p>Visual images were also the focus of first-year student Kelvin Green II’s research. Combing through early MIT student publications, Green II strove to understand early campus attitudes through the images that MIT students drew. He found racialized and mocking images of African-Americans; hooded figures evocative of the Ku Klux Klan; and an absence of images&nbsp;depicting&nbsp;African-Americans as students or engineers — an absence at odds with the actual occupations of black male Bostonians during the 1881-1911 period.</p> <p>When asked about the impact of these slavery-related findings on black students at MIT today, Green II reflected: “How do you quantify the experience of a black student confronted with the images I’ve put up?” Understanding racism, he continued, requires qualitative analysis, including listening to the stories of those most affected by it. “Engage in dialogue. If you don’t have a black friend, make a black friend!” he said to applause.</p> <p>Sophomore Mahalaxmi Elango dug into MIT’s early curriculum for her project, and discovered not only an early focus on mining — an industry that had relied heavily on enslaved people — but also that slavery was a subject for academic discussion at MIT. A popular course in moral philosophy, for example, explored the relationship between technology and the economies of labor, including the labor of enslaved people.
An 1873 political economy exam asked: “Define <em>Labor</em>, and prove that the service of slaves, or any involuntary work, is not labor in the economic sense.”<br /> <br /> Charlotte Minsky, a sophomore majoring in earth, atmospheric, and planetary sciences, examined the careers of students who came to MIT in its first 15 years and found that a large concentration of these students went into the railroad industry. She speculated that this focus emerged from the need to rebuild the South after the Civil War. “It’s essential to the narrative of early MIT that there’s a flow of money and ideas from the South to the North in the era of Reconstruction,” she said. Of MIT’s investigation into slavery, Minsky observed, “MIT is setting a precedent for similar institutions. We are showing that connections to slavery are very nuanced, and that science and technology are an aspect of this history that can no longer be left in the wings.”</p> <p><strong>Raising questions </strong></p> <p>What skepticism there is about the “MIT and Slavery” research course takes the form of questions like the following, posed by a livestream viewer: “What gives anyone today the right to judge the actions of people in the distant past by modern popular moral standards?”</p> <p>Wilder welcomed the opportunity to address that question. “Birth gives us the right,” he said, with a chuckle. “The idea that to judge the past by modern moral values is somehow ahistorical misunderstands what history is. History is the science of thinking about the past and how it influences the present.” The MIT community is capable of thinking about the past in constructive ways, he added.
“One of the goals of the project is to create opportunities for us as a community — as communities — to wrestle with difficult issues in dialogue in a democratic and open way.”<br /> <br /> Another community member asked about one of the questions the project raises for education: “What would you say the implications of MIT’s findings are for teaching science and the history of science?” As an initial response, Kim noted that the “MIT and Slavery” course will itself be one example, continuing to research and share discoveries about the relationship between science, technology, and the social realities of which they are a part. She added, “We are asking people to think differently.”</p> <p><strong>Looking ahead</strong></p> <p>The value of this ongoing exploration is immeasurable, President Reif said. “If we have the courage to look at even the troubling parts of our history,” he said, “I believe we have a much better chance of approaching the present and the future with humility and self-awareness.”</p> <p>The “MIT and the Legacy of Slavery” dialogue will continue at MIT, led by Nobles, who will announce plans later this spring for new opportunities to contribute ideas and reflections. The process Nobles envisions will be one of “looking at old things with new eyes.” In the meantime, and in parallel with the Institute-wide conversation, updates and information on the “MIT and Slavery” course findings will be posted to the <a href="" target="_blank">course website</a>.</p> <p><em>Story prepared by SHASS Communications<br /> Editorial team: Meg Murphy and Emily Hiestand</em></p> Four MIT students from the first "MIT and Slavery" class (21H.S01) presented well-researched information and narratives that shed new light on the history of science and technology in the U.S. (Left to right) Charlotte Minsky '20, Kelvin Green II '21, Mahi Elango '20, teaching assistant and PhD student Clare Kim, and, at the podium, Alaisha Alexander '18.
Screenshot from MVP webcastHistory, History of MIT, Students, Faculty, Staff, President L. Rafael Reif, Diversity and inclusion, History of science, Slavery, MIT Libraries, SHASS, School of Humanities Arts and Social Sciences MIT Black History Project launches new website Digital archive features never-before-published image of MIT&#039;s first black woman student. Wed, 21 Feb 2018 16:15:01 -0500 School of Architecture and Planning <p>The MIT Black History Project has launched a&nbsp;<a href="" target="_blank">new website</a>&nbsp;that documents evidence of the role and experience of the black community at MIT since the Institute opened its doors in 1865.<br /> <br /> “Look at this project to get a better sense of what happens when you ignore human potential ... based on appearance, which has been much of our country's history,” says Melissa Nobles, professor of political science and Kenan Sahin Dean of the School of Humanities, Arts, and Social Sciences (SHASS) at MIT.<br /> <br /> The MIT Black History Project was founded and is directed by Clarence G. Williams, adjunct professor emeritus and former special assistant to the president. The project is an ongoing collaborative research effort sponsored by the MIT Office of the Provost.<br /> <br /> Since 1995, the project has worked to archive over 150 years of the black experience at MIT and identified six key historical periods along the way: Roots and Exponents (1861-1920), Order of Operations (1921-1945), Potential Output (1946-1954), Critical Mass (1955-1968), Integration and Differentiation (1969-1994), and Rising Voices (1995-present).<br /> <br /> At present, the website offers more than 500 illustrations, photographs, and other archival material available for community members, scholars, journalists, and other interested individuals to search. An additional 2,500 items already collected by the project will be included in the future. 
A major call to action is on the site’s&nbsp;<a href="" target="_blank">Giving</a>&nbsp;page, where people are invited to share their own pieces of MIT black history.<br /> <br /> Williams’ objective has been to place the black experience at MIT in its full and appropriate context by researching and disseminating materials that expose communities both inside and outside MIT to this rich, historically significant legacy.<br /> <br /> This effort includes lending research support to other Institute-affiliated entities such as the&nbsp;<a href="" target="_blank">MIT and Slavery course</a>&nbsp;taught by historian Craig Steven Wilder and archivist Nora Murphy, the MIT&nbsp;<a href="" target="_blank">MLK Visiting Professor and Scholars Program website</a>, and various <a href="">Black Alumni/ae of MIT</a> (BAMIT) endeavors.<br /> <br /> Via the materials disseminated by the MIT Black History Project, Williams hopes that “future generations may relate to our hopes and disappointments [at MIT], to our struggles and achievements.”<br /> <br /> Williams joined the MIT administration in 1972 as assistant dean of the graduate school. Throughout an MIT career spanning over three decades, he served as special assistant to the president and chancellor for minority affairs, acting director of the Office of Minority Education, assistant equal opportunity officer, ombudsperson, and adjunct professor in the Department of Urban Studies and Planning.<br /> <br /> Williams is the author of "Reflections of the Dream, 1975-1994: Twenty-One Years Celebrating the Life of Dr. Martin Luther King, Jr. 
at the Massachusetts Institute of Technology" (MIT Press, 1996) and "Technology and the Dream: Reflections on the Black Experience at MIT, 1941-1999" (MIT Press, 2003).<br /> <br /> <strong>A valuable new resource</strong><br /> <br /> Website content includes the first visual evidence of the MIT black experience, dating back to 1875, at the early campus in downtown Boston:&nbsp;<a href="" target="_blank">Jones' Lunch</a>&nbsp;is a small cafeteria owned and operated out of a gymnasium by a black caterer named Jones.<br /> <br /> By 1892, the Institute sees its first black graduate,&nbsp;<a href="" target="_blank">Robert R. Taylor</a>. He goes on to become the first known African-American architect to be accredited in the United States. Taylor also designs most of the pre-1930s buildings at Tuskegee Institute, which models its own curriculum after MIT’s.<br /> <br /> The first black woman student to attend MIT is&nbsp;<a href="" target="_blank">Marie C. Turner</a>. She enrolls in 1905, along with her brother Henry Charles Turner, Jr. They are the second case of black siblings to attend MIT since Charles S. Dixon, Class of 1898, and John B. Dixon, Class of 1899.<br /> <br /> It isn’t until 1955 that MIT hires its first black faculty member, linguist&nbsp;<a href="" target="_blank">Joseph R. Applegate</a> — the same year that both Applegate and Noam Chomsky earn their PhDs from the University of Pennsylvania.<br /> <br /> <strong>About the project team</strong><br /> <br /> The web content team is led by writer and MIT alumna Nelly Rosario and digital humanities producer Robert L. Dunbar. Serving as project consultants is a diverse group of current and retired MIT administrators and staff. 
Web development, content strategy, and design services were provided by MIT Information Systems and Technology (IS&amp;T), MIT Communications Initiatives, and the&nbsp;MacPhee Design Group, a web design and development firm from the Boston area.<br /> <br /> The project has received support from MIT President L. Rafael Reif, MIT Provost Martin A. Schmidt, and two previous presidents in memoriam, Charles M. Vest and Paul E. Gray. Alumni have also contributed to this effort, notably philanthropist Reginald Van Lee and RPI president Shirley Ann Jackson, both of whom serve as the project’s senior advisors.<br /> <br /> The MIT Black History Project website launch coincides with Black History Month and with the MIT Black Students’ Union 50th Anniversary year.</p> Image: MIT Black History ProjectHistory, History of MIT, Diversity and inclusion, Urban studies and planning, School of Architecture and Planning, Community, Students, Faculty, Staff, Administration, Provost, History of science, Slavery Energy transition through aesthetics and culture MIT Energy Initiative seminar examines the role of the humanities, design, and aesthetics in catalyzing a fairer, more diverse energy future. Tue, 24 Oct 2017 15:55:01 -0400 Francesca McCaffrey | MIT Energy Initiative <p>What does energy have to do with art, literature, and happiness? In a recent MIT Energy Initiative (MITEI) seminar, Imre Szeman, a professor of communication and culture at the University of Waterloo, addressed this question and engaged in a discussion of what he calls&nbsp;“petroculture” with MIT faculty of architecture and humanities, arts, and social sciences.</p> <p>Faculty host Rania Ghosn, an assistant professor of architecture and urbanism, heralded Szeman’s talk as a different kind of conversation about energy. 
She was “delighted,” she said, to host an event between MITEI and the School of Architecture and Planning that went “beyond the excellent scientific and engineering research to engage methods and insights from the humanities, aesthetics, and design in conversations on energy and energy transitions.”</p> <p><strong>An intentional energy transition</strong></p> <p>“We moderns still tend to take energy as a largely neutral aspect of social life,” Szeman said. “But the forms of energy we use, and how we use them, shape society through and through.”</p> <div class="cms-placeholder-content-video"></div> <p>Petroculture&nbsp;“is the name for a society that’s been organized around the energies and products of fossil fuels, the capacities it engenders and enables, and the situations and context it creates,” he said.&nbsp;“Our expectations, our sensibilities, our habits, our ways of being in and moving across the world, how we imagine ourselves in relation to nature, as well as in relation to each other, these have all been sculpted by and in relation to the massively expanded energies of the fossil fuel era.”</p> <p>For most of the 20th century, energy meant oil. Oil, cheap and portable, meant prosperity.</p> <p>“Desire was coordinated by and in relation to the consumption of fossil fuels,” Szeman said. Owning a car, a computer, a house of a certain size, having reliable electricity — all those aspirations became benchmarks of social progress.</p> <p>Of course, burning fossil fuels has ecological drawbacks, not least of which are its attendant climate-changing emissions. In addition, oil-fueled progress has not been universal.</p> <p>“The fossil economy left much of the world behind,” Szeman said. “You were rich because you happened to inhabit a part of the world that was rich in oil.” These concerns, as well as increasing calls for diversity and resiliency in the energy sector, are changing the face of the energy landscape. 
We are on the verge of what Szeman calls an “intentional transition.”</p> <p>The key to this transition, Szeman said, is to examine the set of values upon which we want society to be based. Changing the types of energy we consume, and how much, can bring about “tectonic” societal shifts, and he suggested a set of principles to guide the transition.</p> <p>First, agency and mobilization: People should take a more active role, he said, in directing changes in energy consumption, whether that means lobbying for policy changes or altering the form and amount of energy people consume on a personal level.</p> <p>Szeman’s second principle is collective stewardship. “Why can’t energy be something akin to water?” he asked. “Something that’s good for all of us, that is managed and protected effectively?”</p> <p>Szeman also stressed the importance of equality and ethics. Using new forms of energy to bring social progress to developing nations, and taking into account the “moral differences” between using energy for “elementary needs we all have for food, water, and the basic essentials of life” versus the “surplus material and immaterial desires that energy quite literally feeds and fuels” are both critical parts of what he sees as a forward-thinking — and fair — energy transition.</p> <p>The energy sources making up this transition should be diverse and sustainable, and finally, the energy-consuming populace, Szeman said, needs to rethink its idea of growth.</p> <p>“In the after-oil economy, growth and development are tied to the social values articulated above and joined to a new ethics of resilience and sustainability,” he said.</p> <p><strong>The cultural toolbox</strong></p> <p>A panel discussion moderated by Ghosn followed Szeman’s talk. 
Rosalind Williams, the Bern Dibner Professor of the History of Science and Technology at the MIT Program on Science, Technology, and Society, responded to and challenged Szeman’s proposition, taken from novelist Amitav Ghosh, that, even in “an era in which oil has been so important … there are very few novels that speak about oil, that address it directly.”</p> <p>One of the works she noted was Herman Melville’s 1851 novel “Moby Dick,” which she called “<em>the</em> oil-based novel.” “The age of sail and wind is turning into the age of oil before your eyes, because you’re chasing whales, and you’re chasing them for their oil,” she said. She pointed in particular to the urgency and detail Melville lent the scenes in which the crew distills the whale blubber to extract oil from it.</p> <p>Caroline Jones, a professor of art history in the School of Architecture and Planning, discussed a different medium: the visual arts. In the 1960s in particular, she said, “You do begin to get a visualization, or a visibility … for the complex social costs and institutions and infrastructures that are built around energy extraction.”</p> <p>She noted Hans Haacke’s 1981 sculpture “Creating Consent,” which consisted of an oil barrel with television antennae attached. Through this piece, Jones says, Haacke created&nbsp;“an entire narrative around oil industries, extraction industries, funding culture precisely to make their pollution invisible.”</p> <p>This message dovetails with Szeman’s comments that art and culture are a product of “the forms and conventions that shape the modern way of thinking.” At the same time, though, works like those Williams and Jones mentioned have the power to turn the lens and interrogate those very forms and conventions.</p> <p>According to Szeman, this act is critical to a successful energy transition. “We need to reexamine the cultural forms that came to life when energy was cheap and abundant,” he said. 
“Transitioning from oil to another energy source will entail the unmaking and remaking of our social worlds.”</p> <p>This talk was&nbsp;one in a series of MITEI seminars supported by IHS Markit.</p> Left to right: Rania Ghosn, Rosalind Williams, and Caroline Jones joined guest lecturer Imre Szeman for a panel discussion after his talk on ways to shift from “petroculture.”Photo: Deirdre Carson/MIT Energy InitiativeSchool of Architecture and Planning, Architecture, Climate change, Energy, Oil and gas, MIT Energy Initiative, Renewable energy, Sustainability, SHASS, Program in STS, History of science MIT physicist Rainer Weiss shares Nobel Prize in physics LIGO inventor and professor emeritus of physics recognized “for decisive contributions to the LIGO detector and the observation of gravitational waves.” Tue, 03 Oct 2017 06:12:32 -0400 Jennifer Chu | MIT News Office <p>Rainer Weiss ’55, PhD ’62, professor emeritus of physics at MIT, has won the Nobel Prize in physics for 2017. Weiss won half of the prize, with the other half of the award shared by Kip S. Thorne, professor emeritus of theoretical physics at Caltech, and Barry C. Barish, professor emeritus of physics at Caltech.</p> <p>The Nobel Foundation, in its announcement this morning, cited the physicists “for decisive contributions to the LIGO detector and the observation of gravitational waves.”</p> <p>“We are immensely proud of Rai Weiss, and we also offer admiring best wishes to his chief collaborators and the entire LIGO team,” says MIT President L. Rafael Reif. “The creativity and rigor of the LIGO experiment constitute a scientific triumph; we are profoundly inspired by the decades of ingenuity, optimism, and perseverance that made it possible. It is especially sweet that Rai Weiss not only served on the MIT faculty for 37 years, but is also an MIT graduate. 
Today’s announcement reminds us, on a grand scale, of the value and power of fundamental scientific research and why it deserves society’s collective support.”</p> <p>At a press conference held today at MIT, Weiss credited his hundreds of colleagues who have helped to push forward the search for gravitational waves.</p> <p>“The discovery has been the work of a large number of people, many of whom played crucial roles,” Weiss said. “I view receiving this [award] as sort of a symbol of the various other people who have worked on this.”</p> <p>In describing what the award means to him in a larger context, Weiss said: “This prize and others that are given to scientists is an affirmation by our society of [the importance of] gaining information about the world around us from reasoned understanding of evidence."</p> <div class="cms-placeholder-content-video"></div> <p><strong>Listening for a wobble</strong></p> <p>On Sept. 14, 2015, at approximately 5:51 a.m. EDT, a gravitational wave — a ripple from a distant part of the universe — passed through the Earth, generating an almost imperceptible, fleeting wobble in the world that would have gone completely unnoticed save for two massive, identical instruments, designed to listen for such cosmic distortions.</p> <p>The Laser Interferometer Gravitational-wave Observatory, or LIGO, consists of two L-shaped interferometers, each 4 kilometers in length, separated by 1,865 miles. On Sept. 14, 2015, scientists picked up a very faint wobble in the instruments and soon confirmed that the interferometers had been infinitesimally stretched — by just one-ten-thousandth the diameter of a proton — and that this minuscule distortion arose from a passing gravitational wave.</p> <p>The LIGO Scientific Collaboration, with the Caltech-MIT LIGO Laboratory and more than 1,000 scientists at universities and observatories around the world, confirmed the signal as the first direct detection of a gravitational wave by an instrument on Earth. 
The scientists further decoded the signal to determine that the gravitational wave was the product of a violent collision between two massive black holes 1.3 billion years ago.</p> <p>The momentous result confirmed the theory of general relativity proposed by Albert Einstein, who almost exactly 100 years earlier had predicted the existence of gravitational waves but assumed that they would be virtually impossible to detect from Earth. Since this first discovery, LIGO has detected three other gravitational wave signals, also generated by pairs of spiraling, colliding black holes; the most recent detection was announced <a href="">just last week</a>.</p> <p>“We are incredibly proud of Rai and his colleagues for their vision and courage that led to this great achievement,” says Michael Sipser, the Donner Professor of Mathematics and dean of the School of Science at MIT.&nbsp;“It is a wonderful day for them, for MIT, for risk-taking and boldness, and for all of science.”</p> <p><strong>A gravitational blueprint</strong></p> <p>The detection was an especially long-awaited payoff for Weiss, who came up with the initial design for LIGO some 50 years ago. He has since been instrumental in shaping and championing the idea as it developed from a desktop prototype to LIGO’s final, observatory-scale form.</p> <p>In 1967, Weiss, then an assistant professor of physics at MIT, was asked by his department to teach an introductory course in general relativity — a subject he knew little about. A few years earlier, the American physicist Joseph Weber had claimed to have made the first detection of gravitational waves, using resonant bars — long, aluminum cylinders that should ring at a certain frequency in response to a gravitational wave. When his students asked him to explain how these Weber bars worked, Weiss found that he couldn’t.</p> <p>No one in the scientific community had been able to replicate Weber’s results. 
Weiss had a very different idea for how to do it, and assigned the problem to his students, instructing them to design the simplest experiment they could to detect a gravitational wave. Weiss himself came up with a design: Build an L-shaped interferometer and shine a light down the length of each arm, at the end of which hangs a free-floating mirror. The lasers should bounce off the mirrors and head back along each arm, arriving where they started at the exact same time. If a gravitational wave passes through, it should “stretch” or displace the mirrors ever so slightly, and thus change the lasers’ arrival times.</p> <p>Weiss refined the idea over a summer in MIT’s historic Building 20, a wooden structure built during World War II to develop radar technology. The building, meant to be temporary and known to many as the “Plywood Palace,” lived on to germinate and support innovative, high-risk projects. During that time, Weiss came to the conclusion that his design could indeed detect gravitational waves, if built to large enough dimensions. His design would serve as the essential blueprint for LIGO.</p> <p><strong>An observatory takes shape</strong></p> <p>To test his idea, Weiss initially built a 1.5-meter prototype. But to truly detect a gravitational wave, the instrument would have to be several thousand times longer: The longer the interferometer’s arms, the more sensitive its optics are to minute displacements.</p> <p>To realize this audacious design, Weiss teamed up in 1976 with noted physicist Kip Thorne, who, based in part on conversations with Weiss, soon started a gravitational wave experiment group at Caltech. The two formed a collaboration between MIT and Caltech, and in 1979, the late Scottish physicist Ronald Drever, then of Glasgow University, joined the effort at Caltech. 
The three scientists — who became the co-founders of LIGO — worked to refine the dimensions and scientific requirements for an instrument sensitive enough to detect a gravitational wave.</p> <p>Barry Barish soon joined the team as first a principal investigator, then director of the project, and was instrumental in securing funding for the audacious project, and bringing the detectors to completion.</p> <p>After years of fits and starts in research and funding, the project finally received significant and enthusiastic backing from the National Science Foundation, and in the mid-1990s, LIGO broke ground, erecting its first interferometer in Hanford, Washington, and its second in Livingston, Louisiana.</p> <p>Prior to making their seminal detection two years ago, LIGO’s detectors required years of fine-tuning to improve their sensitivity. During this time, Weiss not only advised on scientific quandaries but also stepped in to root out problems in the detectors themselves. Weiss is among the few to have walked the length of the interferometers’ tunnels in the space between LIGO’s laser beam tube and its encasement. Inspecting the detectors in this way, Weiss would often discover minute cracks, tiny shards of glass, and even infestations of wasps, mice, and black widow spiders, which he would promptly deal with.</p> <p><strong>A cosmic path</strong></p> <p>Weiss was born in 1932 in tumultuous Berlin. When his mother, Gertrude Loesner, was pregnant with Weiss, his father, neurologist Frederick Weiss, was abducted by the Nazis for testifying against a Nazi doctor. He was eventually released with the help of Loesner’s family. 
The young family fled to Prague and then emigrated to New York City, where Weiss grew up on Manhattan’s Upper West Side, cultivating a love for classical music and electronics, and making a hobby of repairing radios.</p> <p>After graduating high school, he went to MIT to study electrical engineering, in hopes of finding a way to quiet the hiss heard in shellac records. He later switched to physics, but then dropped out of school in his junior year, only to return shortly after, taking a job as a technician in Building 20. There, Weiss met physicist Jerrold Zacharias, who is credited with developing the first atomic clock. Zacharias encouraged and supported Weiss in finishing his undergraduate degree in 1955 and his PhD in 1962.</p> <p>Weiss spent some time at Princeton University as a postdoc, where he developed experiments to test gravity, before returning to MIT as an assistant professor in 1964. In the midst of his work in gravitational wave detection, Weiss also investigated and became a leading researcher in cosmic microwave background radiation — thermal radiation, found in the microwave band of the radio spectrum, that is thought to be a diffuse afterglow from the Big Bang.</p> <p>In 1976, Weiss was appointed to oversee a scientific working group for NASA’s Cosmic Background Explorer (COBE) satellite, which launched in 1989 and went on to precisely measure microwave radiation and its tiny, quantum fluctuations. Weiss was co-founder and chair of the science working group for the mission, whose measurements helped support the Big Bang theory of the universe. COBE’s findings earned two of its principal investigators the Nobel Prize in physics in 2006.</p> <p>Weiss has received numerous awards and honors, including the Medaille de l’ADION, the 2006 Gruber Prize in Cosmology, and the 2007 Einstein Prize of the American Physical Society. 
He is a fellow of the American Association for the Advancement of Science, the American Academy of Arts and Sciences, and the American Physical Society, as well as a member of the National Academy of Sciences. In 2016, Weiss received a Special Breakthrough Prize in Fundamental Physics, the Gruber Prize in Cosmology, the Shaw Prize in Astronomy, and the Kavli Prize in Astrophysics, all shared with Drever and Thorne. Most recently, Weiss shared the Princess of Asturias Award for Technical and Scientific Research with Thorne, Barry Barish of Caltech, and the LIGO Scientific Collaboration.</p> Rainer Weiss at home early this morning, after learning that he has won the 2017 Nobel Prize in physics.Photo: M. Scott BrauerAstrophysics, awards, Awards, honors and fellowships, LIGO, Black holes, Faculty, History, History of science, History of MIT, Kavli Institute, National Science Foundation (NSF), Nobel Prizes, Physics, Research, School of Science, Space, astronomy and planetary science Entering the animal world In a history seminar, engineering students explore shifting ideas about animal intelligence and human uses of animals throughout the ages. Fri, 19 May 2017 16:05:01 -0400 Meg Murphy | School of Engineering <p>On a field trip, Harriet Ritvo and her MIT students went to look at preserved animals on public display, or stored as lab specimens, in collections housed at Harvard University. They encountered hundreds of species, some up close: touching the wings of a pickled bat, the silky fur of a mink, and the sharp claws of a lynx and a lion.</p> <p>Largely from the MIT School of Engineering, the students were part of a 14-person seminar on history and anthropology known as 21H.380 /21A.411/21H.980 (People and Other Animals). The class explores topics like how ideas about animal intelligence and agency have shifted over time, the human moral obligations to animals, and the limits imposed on the use of animals.</p> <p>Ritvo, the Arthur J. 
Conner Professor of History at MIT, and a pioneer in the field of animal-human cultural studies, divides the students’ explorations into units, including the history of hunting, the domestication of livestock, and the exploitation of animal labor.</p> <p>On this day’s outing, the aim was to “see how dead animals are displayed, and see behind-the-scenes how the displays are produced,” said Ritvo to her class. “We’ll see collections of dead specimens in various forms — stuffed, skins, skeletons.”</p> <p><strong>Fascinated by preservation</strong></p> <p>Zachary Bierstedt, an MIT senior in the Department of Aeronautics and Astronautics (AeroAstro), took a hands-on approach, jumping at the chance to handle the only mammal capable of sustained flight.</p> <p>“I am curious largely about what the preservation process actually does to the specimens,” said Bierstedt, as he lifted a bat, labelled <em>Phyllostomus hastatus panamensis</em> (great spear-nosed bat), from its jar. He spread its expansive wings. He ran a finger along the thick hair on its torso. “I was not expecting it to be so furry,” said Bierstedt.</p> <p>Access to the normally locked doors of labs and inventory within Harvard’s Museum of Comparative Zoology was facilitated by Mark Omura, a curatorial staff member. He gathered students along a long metal table covered primarily in study skins and detailed the preservation process, explaining the systematic organization of the collection, one of the most extensive in the world.</p> <p>Moving to a cavernous storage room, he let students examine the additional specimens. Bierstedt lingered by a lion skin to feel its well-preserved claws.</p> <p>On another stop, MIT senior Alexa Garcia, a biological engineering major, took in the public displays in the Great Mammal Hall, a Victorian-era gallery in the Harvard Museum of Natural History.</p> <p>“We are learning to think about how people have related to other species over time,” said Garcia. 
She walked by glass cases holding a full-sized giraffe and camels. Hanging from the rafters were large skeletons of a sperm whale, a fin whale, and a right whale. “I find it very useful,” she added, perhaps all too aware of how new technologies developed at MIT, like gene editing, might alter our definition of natural life.</p> <p>MIT senior Veronica Padron, an AeroAstro major, stopped cold in front of some exhibits, snapping photographs. She zoomed in on a jellyfish in "Sea Creatures in Glass" — and then cried out as a hippo and a zebra rose into view as part of the Tropical Forest area.</p> <p>Asked about her experience in the class, Padron said: “In most engineering classes, you are told how things work. You apply principles. This class is more about interpreting meaning. We employ a different style of exploration and discussion. It balances us out.”</p> <p>The field trip wrapped up in the Glass Flowers Exhibit, a collection of models of more than 800 plant species. Students quickly handed Ritvo their final research papers before heading back to MIT. Watching them go, Ritvo said students in science and engineering benefit greatly from the lessons that humanities and social sciences offer. “Students learn a different way of understanding the world.”</p> <p><strong>Finding a voice</strong></p> <p>Students say that their reason for taking the course is more than just about knocking out a humanities requirement. “It’s really interesting to think about things from a different perspective,” said senior Matthew Nicolai. “We have to look through various subjectivities — the animal, the human — and contend with ethical issues. 
I don’t really think that way, so it’s intriguing.”</p> <p>He and Christian Argenti, both mechanical engineering majors, said they read the course description, and swiftly convinced two additional engineering students in their fraternity, Delta Kappa Epsilon, to take the course with them.</p> <p>In one class, they had delved into the ethical treatment of animals. A showing of the 1949 French documentary, "Le Sang des Bêtes" ("Blood of the Beasts"), featured the unvarnished butchering of horses, cattle, and sheep at a slaughterhouse.</p> <p>“There isn’t any attempt to make the killing ambiguous,” Ritvo said about the film. “What does it suggest about how things are perceived differently in different times and places?” Debate had ensued around their MIT seminar table as the scent of hyacinths drifted in from a nearby window overlooking the Charles.</p> <p>Marcus Urann, a junior studying mechanical engineering, appreciates such moments of dialogue. “You can get lost in the mix in engineering. We have large classes with hours of lectures. In this class, we meet weekly and discuss issues in depth,” he said. “It gives you a way to voice an opinion.”</p> Left to right: Matthew Nicolai, Dallace Francis, and Christian Argenti, MIT seniors in the Department of Mechanical Engineering and members of the fraternity Delta Kappa Epsilon, took the MIT History seminar, People and Other Animals, together. Photo: Gretchen ErtlClasses and programs, History, Animals, Students, Undergraduate, Humanities, Evolution, History of science, SHASS, School of Engineering Q&amp;A: On the future of human-centered robotics Professor David Mindell, who researches the interaction between automation and human behavior, discusses the interdependence of people, robots, and infrastructure. 
Mon, 15 May 2017 15:10:01 -0400 School of Humanities, Arts, and Social Sciences <p><em>Science and technology are essential tools for innovation, and to reap their full potential, we also need to articulate and solve the many aspects of today’s global issues that are rooted in the political, cultural, and economic realities of the human world.&nbsp;With that mission in mind, MIT's School of Humanities, Arts, and Social Sciences has launched <a href="" target="_blank">The Human Factor</a> — an ongoing series of stories and interviews that highlight research on the human dimensions of global challenges. Contributors to this series also share ideas for cultivating the multidisciplinary collaborations needed to solve the major civilizational issues of our time.</em></p> <p><em>David Mindell, the Frances and David Dibner Professor of the History of Engineering and Manufacturing and Professor of Aeronautics and Astronautics at MIT, researches the intersections of human behavior, technological innovation, and automation. Mindell&nbsp;is the author of five acclaimed books, most recently "Our Robots, Ourselves: Robotics and the Myths of Autonomy" (Viking, 2015).&nbsp;He is also&nbsp;the co-founder of Humatics Corporation, which develops technologies for human-centered automation. SHASS Communications recently asked him to share his thoughts on the relationship of robotics to human activities, and the role of multidisciplinary research in solving complex global issues. </em></p> <p><strong>Q:</strong> A major theme in recent political discourse has been the perceived impact of robots and automation on the United States labor economy. 
In your research into the relationship between human activity and robotics, what insights have you gained that inform the future of human jobs, and the direction of technological innovation?</p> <p><strong>A:</strong>&nbsp;In looking at how people have designed, used, and adopted robotics in extreme environments like the deep ocean, aviation, or space, my most recent work shows how robotics and automation carry with them human assumptions about how work gets done, and how technology alters those assumptions. For example, the U.S. Air Force’s Predator drones were originally envisioned as fully autonomous — able to fly without any human assistance. In the end, these drones require hundreds of people to operate.</p> <p>The new success of robots will depend on how well they situate into human environments. As in chess, the strongest players are often the combinations of human and machine. I increasingly see that the three critical elements are people, robots, and infrastructure — all interdependent.</p> <p><strong>Q:</strong> In your recent book "Our Robots, Ourselves," you describe the success of a human-centered robotics,&nbsp;and explain why it is the more promising research direction — rather than research that aims for total robotic autonomy. How is your perspective being received by robotic engineers and other technologists, and do you see examples of research projects that are aiming at human-centered robotics?</p> <p><strong>A:</strong> One still hears researchers describe full autonomy as the only way to go; often they overlook the multitude of human intentions built into even the most autonomous systems, and the infrastructure that surrounds them. My work describes situated autonomy,&nbsp;where autonomous systems can be highly functional within human environments such as factories or cities. Autonomy as a means of moving through physical environments has made enormous strides in the past ten years. 
As a means of moving through human environments, we are only just beginning. The new frontier is learning how to design the relationships between people, robots, and infrastructure. We need new sensors, new software, new ways of architecting systems.</p> <p><strong>Q:</strong> What can the study of the history of technology teach us about the future of robotics?</p> <p><strong>A:</strong> The history of technology does not predict the future, but it does offer rich examples of how people build and interact with technology, and how it evolves over time. Some problems just keep coming up over and over again, in new forms in each generation. When the historian notices such patterns, he can begin to ask: Is there some fundamental phenomenon here? If it is fundamental, how is it likely to appear in the next generation? Might the dynamics be altered in unexpected ways by human or technical innovations?</p> <p>One such pattern is how autonomous systems have been rendered less autonomous when they make their way into real world human environments. Like the Predator&nbsp;drone, future military robots will likely be linked to human commanders and analysts in some ways as well. Rather than eliding those links, designing them to be as robust and effective as possible is a worthy focus for researchers’ attention.</p> <p><strong>Q</strong>: MIT President L. Rafael Reif has said that the solutions to today’s challenges depend on marrying advanced technical and scientific capabilities with a deep understanding of the world’s political, cultural, and economic realities. What barriers do you see to multidisciplinary, sociotechnical collaborations, and how can we overcome them?</p> <p><strong>A</strong>:&nbsp;I fear that as our technical education and research continues to excel, we are building human perspectives into technologies in ways not visible to our students. 
All data, for example, is socially inflected, and we are building systems that learn from those data and act in the world. As a colleague from Stanford recently observed, go to Google image search and type in “Grandma” and you’ll see the social bias that can leak into data sets —&nbsp;the top results all appear white and middle class.</p> <p>Now think of those data sets as bases of decision making for vehicles like cars or trucks, and we become aware of the social and political dimensions that we need to build into systems to serve human needs. For example, should driverless cars adjust their expectations for pedestrian behavior according to the neighborhoods they’re in?</p> <p>Meanwhile, too much of the humanities has developed islands of specialized discourse that is inaccessible to outsiders. I used to be more optimistic about multidisciplinary collaborations to address these problems. Departments and schools are great for organizing undergraduate majors and graduate education, but the old two-cultures&nbsp;divides remain deeply embedded in the daily practices of how we do our work. I’ve long believed MIT needs a new school to address these synthetic, far-reaching questions and train students to think in entirely new ways.</p> <p><em>Interview prepared by MIT SHASS Communications<br /> Editorial team: Emily Hiestand (series editor), Daniel Evans Pritchard</em><br /> &nbsp;</p> "The new frontier is learning how to design the relationships between people, robots, and infrastructure," says David Mindell, the Dibner Professor of the History of Engineering and Manufacturing, and a professor of aeronautics and astronautics. 
"We need new sensors, new software, new ways of architecting systems."Photo: Len RubensteinSHASS, Economics, Faculty, History of science, Robotics, Technology and society, Autonomous vehicles, Collaboration, Robots, Aeronautical and astronautical engineering, Artificial intelligence, Jobs, School of Engineering Explained: Neural networks Ballyhooed artificial-intelligence technique known as “deep learning” revives 70-year-old idea. Fri, 14 Apr 2017 11:41:32 -0400 Larry Hardesty | MIT News Office <p>In the past 10 years, the best-performing artificial-intelligence systems — such as the speech recognizers on smartphones or Google’s latest automatic translator — have resulted from a technique called “deep learning.”</p> <p>Deep learning is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years. Neural networks were first proposed in 1943 by Warren McCulloch and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as founding members of what’s <a href=";q=%22first+cognitive+science+department+in+history%22#v=snippet&amp;q=%22first%20cognitive%20science%20department%20in%20history%22&amp;f=false">sometimes called</a> the first cognitive science department.</p> <p>Neural nets were a major area of research in both neuroscience and computer science until 1969, when, according to computer science lore, they were killed off by the MIT mathematicians Marvin Minsky and Seymour Papert, who a year later would become co-directors of the new MIT Artificial Intelligence Laboratory.</p> <p>The technique then enjoyed a resurgence in the 1980s, fell into eclipse again in the first decade of the new century, and has returned like gangbusters in the second, fueled largely by the increased processing power of graphics chips.</p> <p>“There’s this idea that ideas in science are a bit like epidemics of viruses,” says Tomaso Poggio, the Eugene McDermott Professor of 
Brain and Cognitive Sciences at MIT, an investigator at MIT’s McGovern Institute for Brain Research, and director of MIT’s <a href="">Center for Brains, Minds, and Machines</a>. “There are apparently five or six basic strains of flu viruses, and apparently each one comes back with a period of around 25 years. People get infected, and they develop an immune response, and so they don’t get infected for the next 25 years. And then there is a new generation that is ready to be infected by the same strain of virus. In science, people fall in love with an idea, get excited about it, hammer it to death, and then get immunized — they get tired of it. So ideas should have the same kind of periodicity!”</p> <p><strong>Weighty matters</strong></p> <p>Neural nets are a means of doing machine learning, in which a computer learns to perform some task by analyzing training examples. Usually, the examples have been hand-labeled in advance. An object recognition system, for instance, might be fed thousands of labeled images of cars, houses, coffee cups, and so on, and it would find visual patterns in the images that consistently correlate with particular labels.</p> <p>Modeled loosely on the human brain, a neural net consists of thousands or even millions of simple processing nodes that are densely interconnected. Most of today’s neural nets are organized into layers of nodes, and they’re “feed-forward,” meaning that data moves through them in only one direction. An individual node might be connected to several nodes in the layer beneath it, from which it receives data, and several nodes in the layer above it, to which it sends data.</p> <p>To each of its incoming connections, a node will assign a number known as a “weight.” When the network is active, the node receives a different data item — a different number — over each of its connections and multiplies it by the associated weight. It then adds the resulting products together, yielding a single number. 
If that number is below a threshold value, the node passes no data to the next layer. If the number exceeds the threshold value, the node “fires,” which in today’s neural nets generally means sending the number — the sum of the weighted inputs — along all its outgoing connections.</p> <p>When a neural net is being trained, all of its weights and thresholds are initially set to random values. Training data is fed to the bottom layer — the input layer — and it passes through the succeeding layers, getting multiplied and added together in complex ways, until it finally arrives, radically transformed, at the output layer. During training, the weights and thresholds are continually adjusted until training data with the same labels consistently yield similar outputs.</p> <p><strong>Minds and machines</strong></p> <p>The neural nets described by McCullough and Pitts in 1944 had thresholds and weights, but they weren’t arranged into layers, and the researchers didn’t specify any training mechanism. What McCullough and Pitts showed was that a neural net could, in principle, compute any function that a digital computer could. The result was more neuroscience than computer science: The point was to suggest that the human brain could be thought of as a computing device.</p> <p>Neural nets continue to be a valuable tool for neuroscientific research. For instance, particular <a href="">network layouts</a> or <a href="">rules</a> for adjusting weights and thresholds have reproduced observed features of human neuroanatomy and cognition, an indication that they capture something about how the brain processes information.</p> <p>The first trainable neural network, the Perceptron, was demonstrated by the Cornell University psychologist Frank Rosenblatt in 1957. 
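The node behavior described above — multiply inputs by weights, sum, and "fire" only above a threshold — together with the error-driven weight adjustment used to train a one-layer network like Rosenblatt's, can be sketched in a few lines of Python. This is a minimal illustration written for this explainer, not historical Perceptron code; the AND-gate data set and all names are invented for the example:

```python
# A single thresholded node: weighted sum of inputs, firing only when
# the sum exceeds the threshold. (Illustrative sketch, not MIT code.)

def fire(weights, threshold, inputs):
    """Output 1 if the weighted sum of inputs exceeds the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

def train(samples, lr=0.1, epochs=50):
    """Classic perceptron rule: nudge weights toward each mislabeled example."""
    weights, threshold = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, label in samples:
            error = label - fire(weights, threshold, inputs)
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            threshold -= lr * error  # pushing output up means lowering the threshold
    return weights, threshold

# Invented example: learn an AND gate, firing only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, threshold = train(data)
print([fire(weights, threshold, x) for x, _ in data])  # prints [0, 0, 0, 1]
```

After a handful of passes over the data, the weights and threshold settle so that the node fires only for the input (1, 1) — the training loop simply repeats the adjust-until-labels-match process the article describes.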
The Perceptron’s design was much like that of the modern neural net, except that it had only one layer with adjustable weights and thresholds, sandwiched between input and output layers.</p> <p>Perceptrons were an active area of research in both psychology and the fledgling discipline of computer science until 1969, when Minsky and Papert published a book titled “Perceptrons,” which demonstrated that executing certain fairly common computations on Perceptrons would be impractically time consuming.</p> <p>“Of course, all of these limitations kind of disappear if you take machinery that is a little more complicated — like, two layers,” Poggio says. But at the time, the book had a chilling effect on neural-net research.</p> <p>“You have to put these things in historical context,” Poggio says. “They were arguing for programming — for languages like Lisp. Not many years before, people were still using analog computers. It was not clear at all at the time that programming was the way to go. I think they went a little bit overboard, but as usual, it’s not black and white. If you think of this as this competition between analog computing and digital computing, they fought for what at the time was the right thing.”</p> <p><strong>Periodicity</strong></p> <p>By the 1980s, however, researchers had developed algorithms for modifying neural nets’ weights and thresholds that were efficient enough for networks with more than one layer, removing many of the limitations identified by Minsky and Papert. The field enjoyed a renaissance.</p> <p>But intellectually, there’s something unsatisfying about neural nets. Enough training may revise a network’s settings to the point that it can usefully classify data, but what do those settings mean? What image features is an object recognizer looking at, and how does it piece them together into the distinctive visual signatures of cars, houses, and coffee cups?
Looking at the weights of individual connections won’t answer that question.</p> <p>In recent years, computer scientists have begun to come up with <a href="">ingenious</a> methods for <a href="">deducing</a> the analytic strategies adopted by neural nets. But in the 1980s, the networks’ strategies were indecipherable. So around the turn of the century, neural networks were supplanted by support vector machines, an alternative approach to machine learning that’s based on some very clean and elegant mathematics.</p> <p>The recent resurgence in neural networks — the deep-learning revolution — comes courtesy of the computer-game industry. The complex imagery and rapid pace of today’s video games require hardware that can keep up, and the result has been the graphics processing unit (GPU), which packs thousands of relatively simple processing cores on a single chip. It didn’t take long for researchers to realize that the architecture of a GPU is remarkably like that of a neural net.</p> <p>Modern GPUs enabled the one-layer networks of the 1960s and the two- to three-layer networks of the 1980s to blossom into the 10-, 15-, even 50-layer networks of today. That’s what the “deep” in “deep learning” refers to — the depth of the network’s layers. And currently, deep learning is responsible for the best-performing systems in almost every area of artificial-intelligence research.</p> <p><strong>Under the hood</strong></p> <p>The networks’ opacity is still unsettling to theorists, but there’s headway on that front, too. In addition to directing the Center for Brains, Minds, and Machines (CBMM), Poggio leads the center’s research program in <a href="">Theoretical Frameworks for Intelligence</a>. 
Recently, Poggio and his CBMM colleagues have released a three-part theoretical study of neural networks.</p> <p>The <a href="">first part</a>, which was published last month in the <em>International Journal of Automation and Computing</em>, addresses the range of computations that deep-learning networks can execute and when deep networks offer advantages over shallower ones. Parts <a href="">two</a> and <a href="">three</a>, which have been released as CBMM technical reports, address the problems of global optimization, or guaranteeing that a network has found the settings that best accord with its training data, and overfitting, or cases in which the network becomes so attuned to the specifics of its training data that it fails to generalize to other instances of the same categories.</p> <p>There are still plenty of theoretical questions to be answered, but CBMM researchers’ work could help ensure that neural networks finally break the generational cycle that has brought them in and out of favor for seven decades.</p> Most applications of deep learning use “convolutional” neural networks, in which the nodes of each layer are clustered, the clusters overlap, and each cluster feeds data to multiple nodes (orange and green) of the next layer. Image: Jose-Luis Olivares/MIT

Tim Berners-Lee wins $1 million Turing Award CSAIL researcher honored for inventing the web and developing the protocols that spurred its global use.
Tue, 04 Apr 2017 07:00:00 -0400 Adam Conner-Simons | CSAIL <p>MIT Professor Tim Berners-Lee, the researcher who invented the World Wide Web and is one of the world’s most influential voices for online privacy and government transparency, has won the most prestigious honor in computer science, the Association for Computing Machinery (ACM) <a href="" target="_blank">A.M. Turing Award</a>. Often referred to as “the Nobel Prize of computing,” the award comes with a $1 million prize provided by Google.</p> <p>In its announcement today, ACM cited Berners-Lee for “inventing the World Wide Web, the first web browser, and the fundamental protocols and algorithms allowing the web to scale.” This year marks the 50th anniversary of the award.</p> <p>A principal investigator at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) with a joint appointment in the Department of Electrical Engineering and Computer Science, Berners-Lee conceived of the web in 1989 at the European Organization for Nuclear Research (CERN) as a way to allow scientists around the world to share information with each other on the internet. He introduced a naming scheme (URIs), a communications protocol (HTTP), and a language for creating webpages (HTML). His open-source approach to coding the first browser and server is often credited with helping to catalyze the web’s rapid growth.</p> <p>“I’m humbled to receive the namesake award of a computing pioneer who showed that what a programmer could do with a computer is limited only by the programmer themselves,” says Berners-Lee, the 3Com Founders Professor of Engineering at MIT.
“It is an honor to receive an award like the Turing that has been bestowed to some of the most brilliant minds in the world.”</p> <p>Berners-Lee is founder and director of the <a href="" target="_blank">World Wide Web Consortium</a> (W3C), which sets technical standards for web development, as well as the <a href="" target="_blank">World Wide Web Foundation</a>, which aims to establish the open web as a public good and a basic right. He also holds a professorship at Oxford University.</p> <p>As director of CSAIL’s <a href="" target="_blank">Decentralized Information Group</a>, Berners-Lee has developed data systems and privacy-minded protocols such as <a href="" target="_self">“HTTP with Accountability”</a> (HTTPA), which monitors the transmission of private data and enables people to examine how their information is being used. He also leads <a href="" target="_blank">Solid</a> (“social linked data”), a project to re-decentralize the web that allows people to control their own data and make it available only to desired applications.</p> <p>“Tim Berners-Lee’s career — as brilliant and bold as they come — exemplifies MIT’s passion for using technology to make a better world,” says MIT President L. Rafael Reif. “Today we celebrate the transcendent impact Tim has had on all of our lives, and congratulate him on this wonderful and richly deserved award.”</p> <p>While Berners-Lee was initially drawn to programming through his interest in math, there was also a familial connection: His parents met while working on the Ferranti Mark 1, the world’s first commercial general-purpose computer. Years later, he wrote a program called Enquire to track connections between different ideas and projects, indirectly inspiring what later became the web.</p> <p>“Tim’s innovative and visionary work has transformed virtually every aspect of our lives, from communications and entertainment to shopping and business,” says CSAIL Director Daniela Rus.
“His work has had a profound impact on people across the world, and all of us at CSAIL are so very proud of him for being recognized with the highest honor in computer science.”</p> <p>Berners-Lee has received multiple accolades for his technical contributions, from being knighted by Queen Elizabeth to being named one of <em>TIME</em> magazine’s “100 Most Important People of the 20th Century.” He will formally receive the Turing Award during the ACM’s annual banquet June 24 in San Francisco.</p> <p>Past Turing Award recipients who have taught at MIT include Michael Stonebraker (2014), Shafi Goldwasser and Silvio Micali (2013), Barbara Liskov (2008), Ronald Rivest (2002), Butler Lampson (1992), Fernando Corbato (1990), John McCarthy (1971) and Marvin Minsky (1969).</p> Tim Berners-Lee was honored with the Turing Award for his work inventing the World Wide Web, the first web browser, and "the fundamental protocols and algorithms [that allowed] the web to scale." Photo: Henry Thomas

Celebrating Pauline Morrow Austin, a founder of radar meteorology MIT faculty, friends, and family gathered to remember Austin's life and commemorate her contributions to science with the unveiling of an exhibit in EAPS. Tue, 31 Jan 2017 19:00:01 -0500 Lauren Hinkel | Oceans at MIT <p>Modern meteorology would not be what it is today without contributions from Pauline (Polly) Morrow Austin PhD ’42, a longtime director of MIT’s Weather Radar Research Project. Last month, MIT recognized her influence on the field of weather radar with a centennial celebration of her birth.
Throughout the day, MIT faculty, students, and Austin's friends and family gathered in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) to remember her work, share personal moments, and discuss current weather and climate-related research.</p> <p>The events of the day built to the unveiling of an exhibit on the 16th&nbsp;floor showcasing Austin’s meticulous application of radar to weather study and to MIT’s role in its development. A generous gift from an MIT alum and one of Austin’s longtime colleagues made the exhibit — as well as new equipment for the&nbsp;<a href="" target="_blank">Synoptic Meteorology Lab</a>, including a weather camera, weather stations, and a display screen for meteorological data — possible, all in Austin's honor.</p> <div> <p>“The glass ceiling was not something that Polly acknowledged and continuously broke,” said Austin's daughter, Doris Austin Lerner.</p> </div> <p>As one of the first women to graduate from MIT with a doctorate in physics and work in the field of weather radar, Austin set the bar high. She came to MIT in 1939 with degrees in mathematics and physics, and began working with&nbsp;<a href="" target="_blank">Professor Julius Adams Stratton</a>, a future president of MIT, on electromagnetic theory and radar, which was developed in England during World War II. In order to develop the technology further, radar research was moved to MIT, where Austin became involved.</p> <p>She joined the Radiation Laboratory (Rad Lab) at MIT, studying the reflection of megahertz radiowaves off of the ionosphere to extend Long Range Navigation (<a href="" target="_blank">LORAN</a>) from ground waves to skywaves, shared Earle Williams, principal research engineer in the Department of Civil and Environmental Engineering, during the course of the day. 
This classified work was crucial to wartime efforts and brought Austin recognition from <em>The New York Times</em> in an article, “<a href="" target="_blank">Special Roles Vital to Nation Filled by Women Scholars</a>.”</p> <p>After completing her thesis work on the “Propagation of electromagnetic pulses in the ionosphere,” and with Stratton’s encouragement, Austin joined MIT’s Weather Radar Research Project at its inception in 1946. This was the first critical investigation into how radar technology could be used to monitor weather; she focused on comparing measurements of actual rainfall with those found using radar. “She really had a love affair of measuring rainfall with radar and doing it quantitatively, and she did that for decades. The seeds of that came from the Rad Lab, but she was really working on that until 2004 when she was at MIT the last time,” said Williams. Later, Austin went on to direct the Weather Radar Research Project until she retired. She <a href="" target="_blank">died in 2011</a> at age 94.</p> <div> <p>The scope of Austin's research&nbsp;extended further yet, and arguably she’s best known for her work on a weather radar phenomenon called the “<a href="" target="_blank">bright band</a>.” This is a feature seen on radar that delineates between rain and snow as you go vertically in the atmosphere, and helps with weather pattern classification. Her work here is still referenced today and can literally be seen across America. “The reason that the whole United States is covered with s-band radar, it’s probably safe to say, [is] because of Polly Austin. In Europe, there are many networks of c-band radars, but Polly knew that if you wanted accurate [rain] measurements, you had to go with s-band,” said Williams. 
Austin was also instrumental in installing radomes on MIT's Green Building (Building 54), home of EAPS.</p> </div> <p>Austin also chaired the American Meteorological Society’s Committee on Radar Meteorology, and in 1974, she was the first woman to be elected a councilor.</p> <p>Austin, as her students remembered her, was more than a brilliant mind; she was a mentor and scrupulous advisor. Looking at a photo of Austin, Robert A. Houze Jr., professor of atmospheric science at the University of Washington, remarked, “There’s the intelligence in the eyes, the smile, and the soft look that sort of hides the fact she was as tough of an advisor as you’d ever imagine, which was to my great benefit.” She went over results with a fine-toothed comb, provided constructive feedback, and demanded clarity of thought in scientific writing — an experience each of her former students was all too familiar with. But all present at EAPS on Pauline Austin Day expressed a high level of appreciation for the opportunity to work with her. Houze summed it up: “I’m not exaggerating that this mentorship has had a lasting influence that goes right up through today.”</p> <p>Several of her students, children, and contemporaries shared moments spent with Austin and described MIT's involvement with radar development. These included Howard Bluestein, Robert C. Copeland, Kerry Emanuel, Robert A. Houze Jr., Lodovica Illari, Frank D. Marks Jr., William M. Silver, Melvin L. Stone, Earle Williams, Marilyn M. Wolfson, and Austin's daughters Doris Austin Lerner and Carol West.</p> <div> <p>Following remembrances, attendees of the celebration explored current departmental research through a poster session. Presenters included Vince Agard, Brian Green, Mukund Gupta, Michael McClellan, Diamilet Perez-Betancourt, Madeleine Youngs, Emily Zakem and Maria Zawadowicz.
They also toured the Synoptic Lab, posed for photos in front of Austin’s radomes on the roof of the Green Building, and watched as her daughters unveiled the new exhibit honoring Austin and MIT’s radar work.</p> </div> <p>Thinking back on Austin, William Silver of the MIT Weather Radar Research Project described her as “mild mannered” — as many before him had done — but noted that Superman’s Clark Kent shared a similar disposition. However, he added that Austin’s power emanated from her intellect: “Now, that great laboratory is long gone. All that remains, the iconic radomes on the roof. … And as that great laboratory fades from the memory of this building, this Institute, let’s at least not forget Pauline Austin, Class of ’42, PhD physics — one of the founders of radar meteorology [with a] mind of steel.”</p> Pauline Morrow Austin Photo: MIT Library Archives

Joy Buolamwini wins national contest for her work fighting bias in machine learning Media Lab graduate student selected from over 7,300 entrants, awarded $50,000 scholarship in contest inspired by the film "Hidden Figures." Tue, 17 Jan 2017 18:10:00 -0500 MIT Media Lab <p>When <a href="" target="_blank">Joy Buolamwini</a>, an MIT master's candidate in media arts and sciences, sits in front of a mirror, she sees a black woman in her 20s. But when her photo is run through recognition software, it does not recognize her face. A seemingly neutral machine programmed with algorithms — codified processes — simply fails to detect her features.
Buolamwini is, she says, "on the wrong side of computational decisions" that can lead to exclusionary and discriminatory practices and behaviors in society.</p> <p>That phenomenon, which Buolamwini calls the “coded gaze,"  is what motivated her late last year to launch the <a href="" target="_blank">Algorithmic Justice League</a> (AJL) to highlight such bias through provocative media and interactive exhibitions; to provide space for people to voice concerns and experiences with coded discrimination; and to develop practices for accountability during the design, development, and deployment phases of coded systems.</p> <p>That work is what contributed to the Media Lab student earning the grand prize in the professional category of <a href="" target="_blank">The Search for Hidden Figures</a>. The nationwide contest, created by PepsiCo and 21st Century Fox in partnership with the New York Academy of Sciences, is named for a <a href="" target="_blank">recently released film</a> that tells the real-life story of three African-American women at NASA whose math brilliance helped launch the United States into the space race in the early 1960s.</p> <div class="cms-placeholder-content-video"></div> <p>"I'm honored to receive this recognition, and I'll use the prize to continue my mission to show compassion through computation," says Buolamwini, who was born in Canada, then lived in Ghana and, at the age of four, moved to Oxford, Mississippi. She’s a two-time recipient of an Astronaut Scholarship in a program established by NASA’s Mercury 7 crew members, including late astronaut John Glenn, who are depicted in the film "Hidden Figures."</p> <p>The film had a big impact on Buolamwini when she saw a <a href="" target="_self">special MIT sneak preview</a> in early December: "I witnessed the power of storytelling to change cultural perceptions by highlighting hidden truths. 
After the screening where I met Margot Lee Shetterly, who wrote the book on which the film is based, I left inspired to tell my story, and applied for the contest. Being selected as a grand prize winner provides affirmation that pursuing STEM is worth celebrating.&nbsp;And it's an important reminder to share the stories of discriminatory experiences that necessitate the Algorithmic Justice League as well as the uplifting stories of people who come together to create a world where technology can work for all of us and drive social change."</p> <p>The Search for Hidden Figures contest attracted 7,300 submissions from students across the United States. As one of two grand prize winners, Buolamwini receives a $50,000 scholarship, a trip to the&nbsp;Kennedy Space Center&nbsp;in Florida,&nbsp;plus access to New York Academy of Sciences training materials and programs in STEM. She plans to use the prize resources to develop what she calls "bias busting" tools to help defeat bias in machine learning.</p> <p>That is the focus of her current research at the MIT Media Lab, where Buolamwini is in the <a href="">Civic Media group</a> pursuing a master's degree with an eye toward a PhD. "The Media Lab serves as a unifying thread in my journey in STEM. Until I saw the lab on TV, I didn't realize there was a place dedicated to exploring the future of humanity and technology by allowing us to indulge our imaginations by continuously asking, 'What if?'"</p> <p>Before coming to the Media Lab, Buolamwini earned a&nbsp;BS in computer science as a Stamps President's Scholar at Georgia Tech and a master's in learning and technology as a Rhodes Scholar at Oxford University. As part of her <a href="" target="_blank">Rhodes Scholar Service Year</a>, Buolamwini launched <a href="" target="_blank">Code4Rights</a> to guide young people in partnering with local organizations to develop meaningful technology for their communities. 
In that year, she also built upon a <a href="" target="_blank">computer science learning initiative</a> she’d created during her Fulbright fellowship in Lusaka, Zambia, to empower young people to become creators of technology. And, as an entrepreneur, she co-founded a startup <a href="" target="_blank">hair care technology company</a> and now advises an MIT-connected <a href="" target="_blank">"smart" clothing startup</a> aimed at transforming women’s health. She's also an experienced public speaker, most recently at TEDx Beacon Street, the White House, the Vatican, and the Museum of Fine Arts, Boston.</p> <p>From an early age, Buolamwini felt encouraged to aim high in STEM: "I went after my dreams, and now I continue to push myself&nbsp;beyond present barriers to create a more inclusive future. Inclusive participation matters. And, by being visible in STEM, I hope to inspire the next generation of stargazers."</p> MIT grad student Joy Buolamwini (left) meets author Margot Lee Shetterly at an MIT screening of "Hidden Figures." Photo courtesy of Joy Buolamwini.

House rules Research shows how rebuilding Britain’s Houses of Parliament in the 1800s helped create clean-air laws. Thu, 12 Jan 2017 00:00:00 -0500 Peter Dizikes | MIT News Office <p>Britain’s dazzling Houses of Parliament building, constructed from 1840 until 1870, is an international icon.
But the building’s greatest legacy may be something politicians and tourists don’t think about much: the clean air around it.</p> <p>That’s the implication of newly published research by MIT architectural historian Timothy Hyde, who through original archival work has reconstructed a piece of history lost in the haze of time. As his scholarship shows, Parliament’s decades-long reconstruction was so hindered by pollution — the air was eating away at the new stones being laid down — that the British government convened scientific inquiries into the effects of the atmosphere on the new building.</p> <p>Those inquiries spurred new scientific research about the environment at a time when Victorian England was rapidly industrializing, and represent a first, seminal case of examining our built environment to learn more about the natural environment.</p> <p>“The Houses of Parliament project was a catalyst, because of the research that accompanied this building,” Hyde says. “The very specific realization that pollution was corroding the building even as it was being built [formed] a discovery about the environment of the modern city.”</p> <p>Hyde has reconstructed this process in an article published in the <em>Journal of Architecture</em>. The paper, “‘London particular’; the city, its atmosphere and the visibility of its objects,” reconstructs the years-long process through which government officials realized Parliament’s new limestone was being quickly degraded by the notorious soot, smoke, and grime that filled London’s air.</p> <p>The advances that flowed from this were not just scientific, Hyde says, but more broadly represented a recognition of the linkages between all the elements of urban life.</p> <p>“It really did enable a different understanding of the modern city,” says Hyde, who is the Clarence H. Blackall Career Development Associate Professor of Architectural History at MIT.
Observers came to recognize cities, he thinks, “not as a collection of individual buildings, but rather as a set of interrelated causes and effects. One building, like a factory, could cause the decay of another building. And the modern city had to be thought of as trying to achieve an equilibrium between its parts.”</p> <p><strong>London’s burning</strong></p> <p>Britain’s old Parliament complex, a set of buildings dating to medieval times, was engulfed by flames on Oct. 16, 1834, when some wooden tally sticks, used for accounting, caught fire. The overnight blaze destroyed the meeting places of the House of Commons and the House of Lords, among other spaces. The event was witnessed by thousands of spectators, including two of the best-known artists of the era, Joseph Mallord William Turner and John Constable, who later depicted it in their artwork.</p> <p>With Parliament subsequently meeting in temporary quarters, the government held a high-profile competition for a new building, won by the architect Charles Barry, who developed the Gothic Revival design so familiar now to Britons and visitors to London.</p> <p>“The rebuilding of the Houses of Parliament was the single most important architectural project in the 19th century in Great Britain, and was understood as such by the public and the protagonists,” Hyde notes.</p> <p>That’s why, when the new building’s limestone quickly began to decay in the 1840s, the British government formed committees of experts to examine the problem. 
No other building project, in all likelihood, would have received such attention.</p> <p>“The Houses of Parliament project, because of its public nature, enabled this possibility of bringing into public view knowledge about the decaying of buildings,” Hyde says.</p> <p>Parliament had already convened the most knowledgeable people it could find to work on the project; the experts who helped select the stone of the building included geologist William Smith, creator of the iconic Geological Map of England and Wales, and Henry de la Beche, director of the Geological Survey of Great Britain. By 1846, de la Beche had submitted a report to Parliament about the general hazards of smoke pollution. In analyzing the problems of the new Parliament building more specifically, the government inquiry drew upon the pathbreaking research of chemist Robert Angus Smith, who had discovered that the air in Manchester was full of sulphuric acid, while air in the country tended to lack it.</p> <p>Angus Smith’s work led to the conclusion, by the end of the 1850s, that sulphuric “acid rain,” as it came to be called, was indeed corroding urban buildings. The inclusion of this scientific research in Parliament’s inquiries had significance beyond the completion of the building itself. 
Looking at the science of stone decay helped call attention to such environmental matters more broadly, and accelerated the process through which science became incorporated into new legal statutes.</p> <p>By 1875, for instance, Britain passed a new Public Health Act with articles specifically on smoke prevention, building on the kinds of research highlighted by the Houses of Parliament inquiries.</p> <p><strong>Architecture participates in modernity</strong></p> <p>To be sure, Hyde notes, such public-health regulations were gaining momentum from a variety of sources, not just the Houses of Parliament’s lengthy rebuilding process.</p> <p>“The question of public health would have moved ahead in some channels,” Hyde says. “There were already concerns about the effects of pollution on human bodies.”</p> <p>And yet, as Hyde says, the fact that pollution “had effects on buildings was a distinctive question that had consequences.” It also brought into play legal matters of “property and value,” because one building could be damaged by smoke from another one. For this reason as well, pollution problems raised legal questions that could not be ignored — and weren’t, before long. Such statutory laws have been a fundamental part of environmental rules ever since.</p> <p>All of which means that Britain’s Houses of Parliament still matter to us, but not just as a seat of power or emblem of design. Indeed, as Hyde notes in the paper, while many scholars have closely scrutinized the aesthetic meaning of the building, what we have largely missed is its environmental and legal importance.</p> <p>“Architecture participates in changing the processes of modernity,” concludes Hyde. “It does not just reflect them.”</p> Timothy Hyde, architectural historian and associate professor in the History, Theory, and Criticism (HTC) section of the MIT Department of Architecture. 
Photo: Tom Gearty/School of Architecture and PlanningResearch, Faculty, Architecture, School of Architecture and Planning, History, Cities, Environment, Pollution, Europe, Air pollution, Technology and society, History of science &quot;Hidden Figures&quot; screening, discussion addresses the history of black women at NASA Author and executive producer Margot Lee Shetterly explores inspiration for the film; MIT guest speakers provide additional historical context. Thu, 22 Dec 2016 13:50:01 -0500 William Litant | Department of Aeronautics and Astronautics <p>In the late 1950s and early 1960s, the United States&nbsp;was engaged in a frantic competition with the Soviet Union to launch the first Earth-orbiting satellite, place a human in space, and, ultimately, set foot on the moon. Laboring behind the scenes in this monumental technological effort were what NASA termed “colored computers,” African-American female mathematicians responsible for&nbsp;critical calculations and other technical works.</p> <p>On Dec. 25, 20th Century Fox will release the film “<a href="" target="_blank">Hidden Figures</a>,” based on the book of the same title by <a href="" target="_blank">Margot Lee Shetterly</a>. The film tells the true-life story of three of these unsung heroes whose work played a key role in NASA's space race victories. But on Dec. 
8, the MIT community enjoyed a sneak preview&nbsp;screening of the film, thanks to Fox and&nbsp;the MIT Department of&nbsp;Aeronautics and Astronautics (AeroAstro), with the&nbsp;MIT Women’s and Gender Studies Program&nbsp;and the Consortium for Gender, Culture, Women, and Sexuality as co-sponsors.</p> <p>In his review of the film, <em>Variety</em>&nbsp;critic Peter Debruge noted “just how thoroughly the deck was stacked against these women.” He wrote, “‘Hidden Figures’ is empowerment cinema at its most populist, and one only wishes that the film had existed at the time it depicts.”</p> <p>Following the MIT screening at the Kendall Square Cinema, a panel composed of Shetterly, MIT Museum Director of Collections Deborah Douglas, and Institute Professor Sheila E. Widnall offered comments on the film and solicited input from the audience of MIT students, staff, and faculty. Recently retired NASA astronaut and AeroAstro alumna Cady Coleman '83 also addressed the audience.&nbsp;</p> <p>Shetterly, who co-produced the film, provided insights into how she was inspired to write <a href="" target="_blank">her book</a>&nbsp;after growing up with a father who worked as a research scientist&nbsp;at NASA's Langley Research Center in Hampton, Virginia. She also noted that, while the film focuses on three individuals, she learned about scores of women at NASA while doing research for her book, which was published earlier this fall.&nbsp;</p> <p>Widnall, who came to MIT as an undergraduate in 1956 and would later become the Institute’s first female engineering professor, was “shocked”&nbsp;by the fact that one of the film's protagonists, Mary Jackson, was told outright that she couldn't aspire to be an&nbsp;engineer&nbsp;simply because she was a woman. “At MIT, I was never told I couldn’t be an engineer.
I never felt I didn’t belong,” Widnall said.&nbsp;She added that the story of the hurdles faced by the women “is a lesson we must never forget.”</p> <p>Douglas, who authored the book “American Women and Flight Since 1940” (University Press of Kentucky, 2004),&nbsp;moderated the panel. She noted that Shetterly’s work “captures a really basic aspect of aeronautical engineering in the mid-20th century: that engineering analysis relied on a lot of number crunching — not just a few quick computations with a slide rule but reams and reams of equations solved one after another, and while computers would eventually do this, the reality is that there were large cadres of human computers, mostly women, who performed these calculations.”</p> <p>AeroAstro fourth-year student Rachel Harris said she was struck by the hurdles the characters in the movie faced and how they surmounted them. “I hope that this movie can generate a similar response in the broader nation such that we can continue to identify and fix the inequalities we still face,” Harris said.</p> <p>Ashley Simon, a second-year biology major, found the film “powerful.” “I feel like it took us college students out of our comfort zone, which is the time we live in. We’ve all heard stories about the time, but to see what was happening is something totally different. The film took me on an emotional rollercoaster,” Simon said. Simon was particularly pleased that author Shetterly attended the event. “She provided us with more insight into the stories and told us more about the real women she interviewed.
Margot Lee Shetterly was the piece to the puzzle that I didn’t even know was missing.”</p> <p>Douglas summed up her reaction to the event: “The most powerful thing for me was to see how deeply affecting the film was for many in the audience.&nbsp;A starving person sometimes doesn’t know how hungry he or she is until there is an abundance of food.&nbsp;From the audience comments, I think there is a very deep hunger among many at MIT for the kind of affirmation that this film provides.”</p> <p>Copies of the&nbsp;“Hidden Figures” book — including a special young readers' edition — were available at a discounted price prior to the screening, courtesy of the MIT Press.&nbsp;For those interested in diving further into the hidden histories of black women at NASA,&nbsp;the book is the next selection&nbsp;in MIT's all-community book club, MIT Reads. For information on discussion dates, visit the <a href="" target="_blank">MIT Reads&nbsp;website</a>.</p> "Hidden Figures" author and producer Margot Lee Shetterly fields a question at the MIT pre-release screening of the film based on her book. MIT Museum Director of Collections Deborah Douglas (left) and Institute Professor Sheila Widnall were also on the post-screening discussion panel.Photo: William Litant/Department of Aeronautics and AstronauticsSpecial events and guest speakers, Film and Television, Books and authors, History, NASA, Diversity and inclusion, Women in STEM, SHASS, Women, History of science, Space, astronomy and planetary science, Libraries, MIT Press 3Q: Historian Harriet Ritvo on what it means to be &quot;wild&quot; Scientists, social scientists, and humanists heed the call of the wild at MIT workshop. Fri, 04 Nov 2016 17:50:01 -0400 School of Humanities, Arts, and Social Sciences <p><em>What does "wild" mean?<strong> </strong>Scientists, social scientists, and humanists tackled this question during "Call of the Wild," a workshop convened at MIT by Harriet Ritvo, the Arthur J. 
Conner Professor of History at MIT, and Sally Shuttleworth, professor of English literature at Oxford University. The event was sponsored by MIT International Science and Technology Initiatives (MISTI), a program of the MIT School of Humanities, Arts and Social Sciences, and co-sponsored by Constructing Scientific Communities, a project based at Oxford and supported by the United Kingdom Arts and Humanities Research Council.</em></p> <p><em>In opening remarks, Ritvo observed that the term “wild” is receiving renewed attention from academics and popular authors alike. Even those who are critical of the term employ it frequently, she said, also noting that most of us are uncertain of the word's conceptual parameters.</em></p> <p><em>Thus, Ritvo explained, a major goal of the workshop was to "tease apart" this ambiguous "multivalent term." In doing so, presentations spanned disciplines from biology to anthropology, astrophysics, and literature, exploring topics such as "Wildness in the Microbial World," "Bewilderness," "Drawing Boundaries around the Wild," and "Domesticating the Wave." For a full account of all the conference discussions, read the <a href="" target="_blank">Call of the Wild Workshop Report</a>, prepared by Alison Laurence, a PhD candidate in the MIT doctoral program in History, Anthropology, and Science, Technology, and Society (HASTS)</em></p> <p><em>Ritvo, who teaches courses in British history, environmental history, the history of human-animal relations, and the history of natural history, is the author of four books, among them "<a href="" target="_blank">The Animal Estate</a>," recently named as one of the 100 most significant books published by Harvard University Press. 
She shared her thoughts recently about the concept "wild" with SHASS Communications.</em></p> <p><strong>Q</strong>: How can we tell whether or not something is "wild"?</p> <p><strong>A: </strong>This was the question that provided the underlying structure for the workshop, and there are no definitive answers — this is one reason that the workshop was so interesting.</p> <p>"Wild" is a very powerful category now, as it has been for many centuries. The emotional or ethical response to this power, however, has recently altered. That is to say, for most of history, to call something "wild" was to express disapproval, but the term has become sufficiently positive for the Shaw's supermarket chain to brand its "organic" product line as "Wild Harvest," described on its website as "created, flavored, and colored by nature." As wildness has come to seem less threatening and more threatened, people have come to like it better.</p> <p>Even when people agree about whether wildness is good or bad, they have often disagreed about exactly what it is, and about whether an individual organism or group of organisms or even an environment should be described as wild. Sometimes this divergence reflects shifting historical contexts (or lack of historical context). For example, the landscape of the English Lake District has often been characterized as wild, although it is the creation of many generations of sheep and shepherds. Similarly, the first European settlers to arrive in New England perceived the fruitful open woodlands as wild since they were unable to recognize the subtle yet productive management techniques of the people already living there.</p> <p>Sometimes the definition of "wild" reflects the lenses provided by disciplines. The workshop included participants representing the range of academic disciplines, from literary studies to astrophysics. 
Presentations focused on animals, plants, microorganisms, as well as the metaphoric extension of wildness to such entities as waves. They explored the reactions of non-specialists as well as of scientists and scholars.</p> <p>Unsurprisingly, it turns out that deciding whether a fungus is wild or domesticated is a very different process than making an analogous discrimination with regard to a cat. And cheese makers think differently about the wildness of the fungi that transform their milk than do truffle growers about the fungi that they attempt to coax from unpredictably recalcitrant trees. House cats may seem wilder to specialists in animal behavior than they do to historians or pet owners (or the reverse, depending on the cat).</p> <p><strong>Q</strong>: Do these alternative understandings of wildness have practical consequences?</p> <p><strong>A: </strong>The stakes involved in receiving — or being denied — designation as "wild" can be very high. Sometimes the rewards are merely economic (for the designators, not those designated). For example, the aesthetic cachet of wildness has inspired the development of domestic cat breeds that include small wild cats among their relatively distant forebears; thus the Savannah cat has the African serval in its family tree. Such breeds are much more expensive than breeds with no claims to exotic extraction or cats of no particular breed.</p> <p>But often being recognized as "wild" has farther-reaching consequences, especially in the realms of environmental conservation and species protection. It is much easier to garner political and financial support for the preservation of a landscape if it is described as "wild" or "virgin" or "pristine," even though most such claims are vulnerable to challenge. 
This is one reason that the national park movement in the United States and many other places (although not in Europe) began by trying to erase the signs of previous human occupation.</p> <p>Twenty years ago, the environmental historian William Cronon argued against this absolute understanding of "wilderness" in a <a href="" target="_blank">well-known essay</a> that has remained surprisingly controversial.</p> <p>Individual species can also be held to very demanding standards of purity. If their descent is suspected to include significant miscegenation, they can be rejected as candidates for protection or reintroduction. For example, the designation of the red wolf as an endangered species has been entangled with the question of whether it is a pure species or a hybrid of the gray wolf and the coyote.</p> <p>Similar discussions have swirled around "rewilding" projects — that is, both attempts to re-create vanished (preindustrial and preagricultural) landscapes, such as the Buffalo Commons proposed to replace the most arid portions of the Great Plains, and attempts to resurrect the extinct species that inhabited them, such as the aurochs, the ancestor of all extant domesticated cattle that once roamed the forests of Europe. Needless to say, such attempts raise a variety of political issues, as well as environmental and economic ones.</p> <p><strong>Q</strong>: Are such terms as "invasive species" useful in light of the broad spectrum of biological change that occurs continuously, with or without human engagement?</p> <p><strong>A: </strong>Perhaps because being recognized as wild can have such significant consequences, various gatekeeping designations have emerged. That is to say, people have tried to erect barriers to restrict the designation of wildness to animals or other organisms found worthy. In addition to demonstrating unsullied descent, protected species need to be perceived as indigenous or native. 
This requirement reflects the relative brevity of human historical imagination — or the sense that "wild" or "natural" ecosystems are somehow static — in either case, a reluctance to recognize that all terrestrial and marine environments have experienced radical change.</p> <p>The term "invasive" also tends to obscure the extent to which humans bear responsibility for the intrusive presence of such organisms. Thus the Australian brushtail possum is persecuted as invasive in New Zealand, where it was introduced over a century and a half ago in an attempt to establish a fur industry. The lionfish that have become common in the Caribbean were brought to Florida to stock home aquariums; like Japanese knotweed, many plants now targeted for extirpation were introduced to adorn home gardens. Other so-called invasions, such as those of purple loosestrife and zebra mussels, occurred as unintended side effects of global trade.</p> <p>On the other hand, not being sufficiently or conventionally domesticated can also put organisms at risk. Thus animals who lapse from their domesticated condition can be castigated as "feral" — as, occasionally, can be humans who lapse from their civilized condition. In Australia, the sense of "feral" has shifted to overlap significantly with "invasive," so that the Department of the Environment and Energy includes cane toads and red foxes in that category, along with cats, pigs, and camels who have slipped their chains.</p> <p>It is interesting that although humans are occasionally willing to castigate each other as "feral" or "wild," we are seldom inclined to characterize our species as "invasive."</p> <h5><em>Interview prepared by MIT SHASS Communications<br /> Editorial team: Emily Hiestand and Kathryn O'Neill</em><br /> <br /> &nbsp;</h5> The term “wild” is receiving renewed attention from academics and popular authors alike.
MIT historian Harriet Ritvo notes that even those who are critical of the term employ it frequently, and that most of us remain uncertain of the word's conceptual parameters.Photo collage: SHASS Communications Biology, Evolution, History, History of science, Humanities, Environment, Faculty, 3 Questions, Social sciences, Animals, SHASS, MISTI Celebrating pioneering women in STEM at the MIT Libraries For Ada Lovelace Day, a look at 10 women in STEM history from MIT’s rare books collection. Tue, 11 Oct 2016 00:00:00 -0400 Stephen Skuce | MIT Libraries <p><em>The MIT Libraries maintain a significant collection of rare books featuring works by pioneering scientists, mathematicians, and engineers from the past six centuries. To mark <a href="" target="_blank">Ada Lovelace Day</a> — an annual celebration of the history of women in the STEM fields — the libraries present a selection of MIT’s holdings by 10 noted women in STEM. The entries herein are inspired by the new <a href="" target="_blank">Big Names on Campus</a> blog, which highlights important works in STEM history within MIT’s collections. A slideshow below illustrates the variety of documents in the rare collections and in the stacks.&nbsp;</em></p> <p><strong>Maria Agnesi</strong><br /> Unlike most 18th-century parents, the mother and father of Italian mathematician <a href="" target="_blank">Maria Gaetana Agnesi</a> (1718-1799) actively encouraged their daughter to study math and natural philosophy. Agnesi excelled and went on to become the first woman offered a professorship in mathematics at the University of Bologna. Her two-volume “Instituzioni Analitiche” (“Analytical Institutions,” 1748) was translated widely and became a standard university text.
MIT owns the original Italian edition as well as the first English translation.</p> <p><strong>Hertha Ayrton</strong><br /> A British physicist, suffragist, feminist, and the first woman granted membership in the Institution of Electrical Engineers, <a href="" target="_blank">Hertha Ayrton</a> (1854-1923) wrote the definitive text on electric arcs, a breakdown of gas that creates ongoing electrical discharge. The Royal Society of London awarded her the prestigious Hughes Medal in 1906. MIT owns two copies of Ayrton's "The Electric Arc": One is from the libraries’ historic Vail Collection; the other appeared in the stacks.</p> <div class="cms-placeholder-content-slideshow"></div> <p><strong>Alice Ball</strong><br /> The life of chemist <a href="" target="_blank">Alice Ball</a> (1892-1916) was tragically brief, but rich with accomplishment. The first African-American — and the first woman — to receive an advanced degree from the University of Hawaii, she was published in the <em>Journal of the American Chemical Society</em> before she even began her graduate studies. Prior to her death at age 24, she devised a treatment that relieved the suffering of thousands with leprosy, a.k.a. Hansen's disease. MIT’s copy of her paper appeared in 1914.</p> <p><strong>Rachel Carson</strong><br /> American <a href="" target="_blank">Rachel Carson</a> (1907-1964) trained as a marine biologist and wrote widely on marine matters. But with the publication of “Silent Spring” (1962) — a first edition of which is in MIT’s collections — she became a household name. In response to her argument against unrestrained use of the pesticide DDT, she also became an enemy of the pesticide industry, even though a scientific advisory panel headed by MIT's Jerome Wiesner supported her findings. 
Carson's science withstood attacks, and the book was a sensation, giving rise to the modern ecology movement.</p> <p><strong>Émilie du Châtelet</strong><br /> Raised amid the court of Louis XIV, French scholar <a href="" target="_blank">Émilie du Châtelet</a> (1706-1749) was bred to be an ornament of society. Her charm and wit were indeed celebrated, but she made her real mark among the French intelligentsia. She wrote on physics and championed Isaac Newton before unreceptive Parisian academics. Her French translation of his “Principia” — and, more significantly, her in-depth commentary upon it — has never been supplanted. When the MIT Libraries established its first fund for rare book acquisitions in 2015, this was the initial purchase.</p> <p><strong>Marie Curie</strong><br /> Only four people have ever won the Nobel Prize twice, and the first to do it was Polish-French scientist <a href="" target="_blank">Marie Curie</a> (1867-1934). The 1903 physics prize was shared by Curie, her husband Pierre, and Henri Becquerel for their work on radiation. In 1911 Curie won the chemistry prize outright for her discovery of the elements radium and polonium. Her doctoral thesis (1903) was published commercially and translated widely; MIT owns copies in several languages.</p> <p><strong>Caroline Herschel</strong><br /> <a href="" target="_blank">Caroline Herschel</a> (1750-1848) received a minimal education and was treated as a housemaid by her parents in Germany. When she was 22, she traveled to England to join her brother, astronomer William Herschel, hoping for a career in music. Instead, she ended up assisting her brother, often performing menial tasks. Eventually though, she took to astronomy herself, discovering three nebulae and eight comets. Among other honors, the Royal Astronomical Society awarded Herschel its Gold Medal in 1828. 
MIT’s copy of a 1786 report on one of her comet discoveries includes a pullout with engravings of her observations.</p> <p><strong>Ada Lovelace</strong><br /> The creators of Ada, a programming language developed for the U.S. Department of Defense in the early 1980s, had good reasons for naming it after British mathematician <a href="" target="_blank">Ada Lovelace</a> (1815-1852). Like her, it's highly sophisticated and accomplishes a great deal. In 1843, Lovelace set out to translate a French article describing Charles Babbage's proposed "Analytical Engine," an early computing machine. She took matters much further, and when the translation was published with her extensive additions, the world had been presented with what many consider the first computer program. MIT owns an original copy of the translation, which includes her groundbreaking description of how the Analytical Engine would compute the Bernoulli number sequence.</p> <p><strong>Maria Mitchell</strong><br /> Massachusetts-born <a href="" target="_blank">Maria Mitchell</a> (1818-1889) was America's first professional female astronomer. Mitchell held several overlapping jobs: She was the Nantucket Atheneum's librarian; she computed ephemerides for the U.S. Nautical Almanac; and she was on the faculty of Vassar College from its inception. The first woman to earn membership in the American Academy of Arts and Sciences, Mitchell also received a gold medal from Frederick VI of Denmark for her discovery of a comet in 1847. Within MIT’s holdings is a hand-written letter from Mitchell to Emma Rogers, wife of MIT founder William Barton Rogers.</p> <p><strong>Mary Somerville</strong><br /> Like Émilie du Châtelet and Ada Lovelace, Scottish writer <a href="" target="_blank">Mary Somerville</a> (1780-1872) set out to do a translation and ended up creating original work that remains influential. 
Her first book was “Mechanism of the Heavens” (1831), a "common language" translation of Pierre-Simon Laplace's “Traité de Mécanique Céleste” that was swiftly adopted as a university textbook. Her “Preliminary Dissertation,” the book's preface, was so well-regarded that it was published separately as an introductory astronomy text — and MIT holds a copy of this publication from 1832. These and other important works were produced by a woman whose father had forbidden her from reading his books.</p> Pioneering women of science, tech, engineering, and math represented in the MIT Libraries' collections include: Ada Lovelace, Alice Ball, Hertha Ayrton, Marie Curie, Mary Somerville, Rachel Carson, Émilie du Châtelet, Caroline Herschel, Maria Agnesi, and Maria Mitchell.Portraits: Wikimedia Commons and Flickr. Book photos: Maia Weinstock/MITLibraries, Women in STEM, History, History of science, History of MIT, Technology and society, Women, Diversity, Diversity and inclusion Scene at MIT: Margaret Hamilton’s Apollo code A brief history of the famous 1969 photo of the software that sent humans to the moon. Wed, 17 Aug 2016 08:45:00 -0400 Maia Weinstock | MIT News Office <p>Half a century ago, MIT played a critical role in the development of the flight software for NASA’s Apollo program, which landed humans on the moon for the first time in 1969. One of the many contributors to this effort was Margaret Hamilton, a computer scientist who led the Software Engineering Division of the MIT Instrumentation Laboratory, which in 1961 <a href="" target="_blank">contracted with NASA</a> to develop the Apollo program’s guidance system.
For her work during this period, Hamilton has been credited with popularizing the concept of software engineering.&nbsp;</p> <p>In recent years, a striking <a href=";module=people&amp;type=keyword&amp;x=0&amp;y=0&amp;kv=9507&amp;record=0&amp;media=3" target="_blank">photo</a> of Hamilton&nbsp;and her team’s Apollo code has made the rounds on social media and in articles detailing her <a href="" target="_blank">key contributions</a> to Apollo 11's success. According to Hamilton, this now-iconic image (at left, above)&nbsp;was taken at MIT in 1969 by a staff photographer for the Instrumentation Laboratory — later named the Draper Laboratory&nbsp;and today an independent organization — for use in promotion of the lab’s work on the Apollo project. The original caption, <a href="" target="_blank">she says</a>, reads:</p> <p>“Here, Margaret is shown standing beside listings of the software developed by her and the team she was in charge of, the LM [lunar module] and CM [command module] on-board flight software team.”</p> <p>Hamilton, now an independent computer scientist, <a href="" target="_self">described for MIT News</a> in 2009 her contributions to the Apollo software — which last month was <a href="" target="_blank">added in its entirety</a> to the code-sharing site GitHub:</p> <p>“From my own perspective, the software experience itself (designing it, developing it, evolving it, watching it perform and learning from it for future systems) was at least as exciting as the events surrounding the mission. … There was no second chance. We knew that. We took our work seriously, many of us beginning this journey while still in our 20s. Coming up with solutions and new ideas was an adventure. Dedication and commitment were a given. Mutual respect was across the board. Because software was a mystery, a black box, upper management gave us total freedom and trust. We had to find a way and we did. 
Looking back, we were the luckiest people in the world; there was no choice but to be pioneers.”</p> <p><em>Have a creative photo of campus life you'd like to share? <a href="">Submit it</a> to Scene at MIT.</em></p> Computer scientist Margaret Hamilton poses with the Apollo guidance software she and her team developed at MIT. Photos: MIT MuseumHistory of MIT, NASA, Scene at MIT, Staff, Moon, Computer science and technology, History of science, Technology and society, MIT Museum, Women in STEM, Space, astronomy and planetary science How MIT gave &quot;Ghostbusters&quot; its &quot;geek cred&quot; From proton packs to hidden props, the 2016 blockbuster draws upon MIT personalities and scientific panache. Fri, 15 Jul 2016 11:25:01 -0400 Meg Murphy | MIT News correspondent <p>The energetic researchers who grounded the new “<a href="" target="_blank">Ghostbusters</a>” in hard science — giving&nbsp;it “geek cred” — are using a flurry of media attention to alter public perceptions.</p> <p>Janet Conrad and Lindley Winslow, colleagues in the MIT Department of Physics and researchers in MIT's Lab for Nuclear Science, were key consultants for the all-female reboot of the classic 1984 supernatural comedy&nbsp;that is opening in theaters today. And the creative side of the STEM fields — science, technology, engineering, and mathematics —&nbsp;will be on full display.</p> <p>Creativity is, after all, a driving force at MIT, says Conrad. “MIT is like a giant sandbox.&nbsp;You can find a spot and start building your castle, and soon other people will come over to admire it and help.&nbsp;There is a sense that it is okay to think big and to play here that is really wonderful. Keeping in mind that I have an office full of physics toys, I feel like I fit right in.”</p> <p>MIT Chancellor Cynthia Barnhart, the first woman to hold the post, says it’s inspiring to see faculty members influence pop culture for the good. “At MIT, we know that being 'a geek'&nbsp;is cool. 
Movies like this have the potential to tell the whole world that. It’s such an important, powerful message for young people —&nbsp;especially women —&nbsp;to receive,” she says.</p> <p>Kristen Wiig’s character, Erin Gilbert, a no-nonsense physicist at Columbia University, is all the more convincing because of Conrad’s toys. Her office features demos and other actual trappings from Conrad’s workspace: books, posters, and scientific models. Conrad even created detailed academic papers and grant applications for use as desk props.</p> <p>“I loved the original 'Ghostbusters,'” says Conrad. “And I thought the switch to four women, the girl-power concept, was a great way to change it up for the reboot. Plus I love all of the stuff in my office. I was happy to have my books become stars.”</p> <p>Conrad developed an affection for MIT while absorbing another piece of pop culture: "Doonesbury." She remembers one cartoon strip featuring a girl doing <a href="" target="_blank">Psets</a>. She is discouraged until a robot comes to her door and beeps. All is right with the world again. The exchange made an impression. “Only at MIT do robots come by your door to cheer you up,” she thought.</p> <p>Like her colleague, Winslow describes mainstream role models as powerful, particularly when fantasy elements in film and television enhance their childhood appeal. She, too, loved “Ghostbusters” as a kid. “I watched the original many times,” she recalls. “And my sister had a stuffed Slimer.”</p> <p>Winslow jokes that she “probably put in too much time” helping with the remake. Indeed, <em>Wired </em>magazine <a href="" target="_blank">recently detailed</a>: “In one scene in the movie, Wiig’s Gilbert stands in front of a lecture hall, speaking on challenges of reconciling quantum mechanics with Einstein’s gravity.
On the whiteboards, behind her, a series of equations tells the same story: a self-contained narrative, written by Winslow and later transcribed on set, illustrating the failure of a once-promising physics theory called SU(5).”</p> <p>Movie reviewers have been floored by the level of set detail. Also deserving of serious credit is James Maxwell, a postdoc&nbsp;at the Lab for Nuclear Science&nbsp;during the period he worked on "Ghostbusters." He is now a staff scientist at Thomas Jefferson National Accelerator Facility in Newport News, Virginia.</p> <p>Maxwell crafted realistic schematics of how proton packs, ghost traps, and other paranormal equipment might work. “I recalled myself as a kid, poring over the technical schematics of X-wings and Star Destroyers. I wanted to be sure that boys and especially girls of today could pore over my schematics, plug the components into Wikipedia, and find out about real tools that experimental physicists use to study the workings of the universe.”</p> <p>He too hopes this behind-the-scenes MIT link with a Hollywood blockbuster will get people thinking. “I hope that it shows a little bit of the giddy side of science and of MIT; the laughs that can come with a spectacular experimental failure or an unexpected breakthrough.”</p> <p>The movie depicts the worlds of science and engineering, as drawn from MIT, with remarkable conviction, says Maxwell. “So much of the feel of the movie, and to a great degree the personalities of the characters, is conveyed by the props,” he says.</p> <p>Kate McKinnon’s character, Jillian Holtzmann, an eccentric engineer, is nearly inseparable from, as Maxwell says, “a mess of wires and magnets and lasers” — a pile of equipment replicated from his MIT lab. When she talks proton packs, her lines are drawn from his work.</p> <p>Keep an eye out for treasures hidden in the props.
For instance, Wiig’s character is the recipient of the Maria Goeppert Mayer “MGM Award” from the American Physical Society, which hangs on her office wall. Conrad and Winslow say the honor holds a special place in their hearts.</p> <p>“We both think MGM was inspirational. She did amazing things at a time when it was tough for women to do anything in physics,” says Conrad. “She is one of our favorite women in physics,” adds Winslow. Clearly, some of the film’s props and scientific details reflect their personal predilections, but Hollywood — and the nation — is also getting a real taste of MIT.</p> Abby Yates (Melissa McCarthy, left), a gung-ho scientist with expertise in the paranormal, and Jillian Holtzmann (Kate McKinnon, right), an eccentric engineer, tinker in a nearly perfect replication of an MIT lab.Photo: Sony PicturesFilm and Television, Faculty, Physics, STEM education, maker movement, History of science, Women in STEM, School of Science, Nuclear science and engineering, School of Engineering Featured video: The MIT Telephone Banquet A century ago, MIT hosted what became the largest-ever transcontinental telephone circuit at an event marking the Institute&#039;s move to Cambridge. Tue, 14 Jun 2016 13:20:00 -0400 MIT News Office <div class="cms-placeholder-content-video"></div> <p>One hundred years ago today, on June 14, 1916, MIT and the American Telephone and Telegraph Company attempted the largest transcontinental telephone circuit of the time. People were skeptical when MIT alumni proposed a gathering of 1,500 in Boston's Symphony Hall that would link alumni from around the country with a complicated telephone circuit, especially since phones were still a novel technology.</p> <p>“You didn’t have telephone lines all over the country like you do today,” says Nora Murphy, archivist at the MIT Institute Archives. 
“It was a very impressive display of technology.” The affair, which became known as the <a href="" target="_blank">Telephone Banquet</a>, was the culminating event in the dedication of MIT's new Cambridge campus.</p> <p>As MIT celebrates 100 years of its “<a href="" target="_self">New Technology</a>,” Murphy stresses the significance of the event: “This proved that MIT could do amazing things and could do something that nobody else had ever tried or accomplished before.”</p> <p><em>Submitted by: MIT Alumni Association | Video by: Brielle Domings | Duration: 3 minutes 34 seconds</em></p> MIT alumni telephone banquet of 1916Featured video, Century in Cambridge, History of MIT, Alumni/ae, History of science Q&amp;A: Rainer Weiss on LIGO’s origins MIT physicist developed the concept for LIGO as a teaching exercise. Thu, 11 Feb 2016 10:25:00 -0500 Jennifer Chu | MIT News Office <p><em>Scientists from MIT and Caltech, along with others around the world, have made the <a href="">first direct detection</a> of gravitational waves reaching the Earth, using an instrument known as LIGO (Laser Interferometer Gravitational-Wave Observatory). </em></p> <p><em>LIGO is a system of two identical detectors located 1,865 miles apart. Each detector consists of two 4-kilometer-long vacuum tubes, arranged in an L-shape, through which scientists send laser beams. As each beam reaches the end of a tube, it bounces off a mirror and heads back in the opposite direction. All things being equal, both laser beams should arrive back at their source at precisely the same time and, due to interference, cancel the light that would go to a photodetector. If, however, a gravitational wave passes through the detector, according to Albert Einstein’s predictions of 100 years ago, the wave should stretch space in one tube while contracting the space in the other tube by an infinitesimal amount, thereby destroying the perfect cancellation and allowing light to reach the photodetector. 
</em></p> <p><em>On Sept. 14, 2015, the LIGO Scientific Collaboration detected a very slight disruption in both detectors. After careful analysis, the researchers have confirmed that the disruption is indeed a signal of gravitational waves, originating from the merging of two massive black holes 1.3 billion light-years from Earth. </em></p> <p><em>Today, LIGO involves some 950 scientists at universities around the United States, including MIT, and in 15 other countries. But nearly four decades ago, the instrument was merely an MIT class exercise, conceived by Rainer Weiss, now a professor emeritus of physics. </em>MIT News<em> spoke with Weiss about the history of LIGO’s design and the 40-year effort to prove Einstein right. </em></p> <p><strong>Q: </strong>Where does the story of LIGO begin?</p> <p><strong>A:</strong> It started here in 1967. I was asked by the head of the teaching program in physics to give a course on general relativity. By the time 1967 had rolled around, general relativity had been relegated to mathematics departments. It was a theory of gravity, but it was mostly mathematics, and in most people’s minds it bore no relation to physics. And that was mostly because experiments to prove it were so hard to do — all these effects that Einstein’s theory had predicted were infinitesimally small.</p> <p>Einstein had looked at the numbers and dimensions that went into his equations for gravitational waves and said, essentially, “This is so tiny that it will never have any influence on anything, and nobody can measure it.” And when you think about the times and the technology in 1916, he was probably right.</p> <p>The big thing that’s happened in the last 100 years is, people discovered things in astronomy which were very different from what they knew in 1916 — tightly compact sources, enormously dense, like neutron stars and black holes. 
And there was technology for doing precision measurements, because you had lasers, masers, electronics, computers, and a whole bunch of stuff people didn’t have in 1916.</p> <p>So the technology and knowledge of the universe made it possible, by the time we got into this, to contemplate trying to look for gravitational waves.</p> <p>In the 1960s, Joseph Weber at the University of Maryland had the idea that maybe the technology had gotten to the point where you could look for gravitational waves, and he invented a method for doing that. He imagined a sort of xylophone made of great big masses, called resonant bars. He expected a gravitational wave would come along and pull on one of the bars and squeeze it, and as the wave left, it would leave a pulse, and the thing would ring and you could hear it.</p> <p>It was the first idea that you should do something active to go look for gravitational waves. And he had claimed a discovery in the 1960s.</p> <p>When I taught my course, the students were very interested in finding out what this was. And I’ll be honest with you, I couldn’t for the life of me understand the thing he was doing. That was the problem. Because it completely countered every intuition I had by then developed about general relativity. I couldn’t explain it to the students.</p> <p>That was my quandary at the time, and that’s when the invention was made. I said, “What’s the simplest thing I can think of to show these students that you could detect the influence of a gravitational wave?”</p> <p>The obvious thing to me was, let’s take freely floating masses in space and measure the time it takes light to travel between them. The presence of a gravitational wave would change that time. Using the time difference, one could measure the amplitude of the wave. Equations for this process are simple to write and most of the students in the class could do it. Forget for a moment that this was a thought experiment requiring impossibly precise clocks. 
The principle was OK.</p> <p>I didn’t think much more of it until about a year later, when I began to realize something about Weber’s experiments — nobody was getting the answer he was getting. He had made a huge and powerful claim. And I began to realize, maybe this was wrong, and maybe even his idea of how it works was wrong.</p> <p>So I sat down one summer in a little room in Building 20, the “Plywood Palace” on Vassar Street, and worked the whole summer on the idea that I had talked with my students about. And knowing what you could do with lasers, I worked it out: Could you actually detect gravitational waves this way? And I came to the conclusion that yes, you could detect gravitational waves, at a sensitivity much better than what Weber was looking for.</p> <p><strong>Q: </strong>What did it take to bring this idea into physical form?</p> <p><strong>A:</strong> We had been building a 1.5-meter prototype in RLE [the Research Laboratory of Electronics] using [military] funding, and were fairly well along. All at once funding was gone, due to the Mansfield amendment, which was a reaction to the Vietnam War. In the mind of the local RLE administrators, research in gravitation and cosmology was not in the military’s interest and support was given to solid-state physics, which was deemed more relevant. For the first time, I had to write proposals to other government and private agencies to continue our research.</p> <p>Nobody was seriously working on gravitational wave interferometry yet, although as I learned later, others had thought about it as well. A German group at the Max Planck Institute in Garching had just finished building a Weber bar. They had worked with the Italians and found that Weber was wrong. They had probably done the very best experiment of anybody to show this. That was the mid ’70s.</p> <p>They were asked to review my proposal to the National Science Foundation, just as they were thinking about the next thing to work on. 
They had been thinking, as many other groups in the world had at the time, of making even better Weber bars by cooling them to close to absolute zero. Instead they made a decision to try the interferometer idea. They called me to ask if there were any students who had been trained on the 1.5-meter prototype so that they could offer them a job. (At exactly the time they called there were none; a little later David Shoemaker, who had worked on the MIT prototype, did join the Garching group.) They then built a 3-meter prototype, got it working, and did a beautiful job.</p> <p>Next they built a 30-meter one. By the time I got funding from the NSF and got going again, the German group had really solved most of the technical problems of the idea, and shown that all the calculations I had done were right on the money, that it worked just as calculated. They also added some ideas of their own that made it better.</p> <p>A key step came in 1975: Because I was also doing studies in cosmic background radiation supported by NASA, I was asked by NASA to run a committee on uses of space research in the field of cosmology and relativity. What came out of that committee, for me, was that I met [Caltech physicist] Kip Thorne, whom I had asked to be a witness for the committee.</p> <p>I picked Kip up at the airport on a hot summer night when Washington, D.C., was filled with tourists. He did not have a hotel reservation so we shared a room for the night. Kip had developed one of the best theory groups in gravitation at Caltech and was thinking of bringing an experimental gravitation group there. We laid out on a big piece of paper all the different experiments one could build a new group around. I told him about this thing we were working on. He had never heard about it, and he got very interested. 
What came of it was that Kip and I eventually decided Caltech and MIT would do this [project that became LIGO] together.</p> <div> <p><strong>Q: </strong>This was an ambitious vision, with undoubtedly a long and complicated history. What were some key moments that drove the project forward?</p> <p><strong>A:</strong> In the late 1970s, the MIT group, now including Peter Saulson and Paul Linsay, did a study with industry to determine the feasibility of building a large, kilometer-scale gravitational wave interferometer. The study looked at how to make large vacuum systems and considered how to develop scaling laws for costs, the possible sites where one could build 5- to 10-kilometer L-shaped structures with minimum earth-moving, and the availability of optics and light sources. We looked at the possible sources of gravitational waves and several competing interferometer concepts that had been prototyped in different labs in the world. The information was put in a report, called the Blue Book, and presented to the NSF in 1983. Scientists from Caltech and MIT together presented the ideas developed in the Blue Book, as well as the results of the prototype research.</p> <p>The proposal we presented was to make a detector system sensitive enough to actually detect gravitational waves from an astrophysical source (not just a new prototype). The proposal was to build two detectors. You couldn’t do science with one; you had to have two separate detectors, equally sensitive, and long enough.</p> <p>That was a real struggle later on. You wanted to maintain those ideas, and people later wanted to shave it down: Why not just build one long one? Why build it so long? All these arguments were made, but we stuck it out. We had to, otherwise we would never have survived and we wouldn’t be here today. 
We got an endorsement from the committee: risky research with the possibility of a profound outcome, well worth considering as a new project by the NSF.</p> <p>By the mid-1980s, NSF kept trying to figure out how to start this. Then in 1986, an interesting thing happened that finally broke the logjam. Richard Garwin, who had worked with Enrico Fermi [the 1938 Nobel laureate in physics] and with the Department of Energy, and who had done the calculations and the actual development work for the first hydrogen bomb, had become chief scientist of IBM. He had read about Weber’s experiments and decided with another IBM associate to build a little one, much smarter than what Weber had built — and he saw nothing.</p> <div> <p>NSF was trying to sell this huge new program for gravitational waves. Garwin got wind of it, and he thought he had slain this dragon. He wrote a letter to the NSF saying, “If you’re going to persist with this, you’d better have a real study.”</p> <p>So we ran a study at the American Academy of Arts and Sciences on Beacon Street in Cambridge. It was a one-week meeting with an excellent committee of hard-nosed scientists. The recommendation the committee made was unbelievably good: The project is absolutely worth doing, don’t divide it up into one detector at a time, make it the full length, no more prototypes. It also recommended a change in the management of the project to have a single director rather than a steering group, which was the way we had managed the project.</p> <p>By 1989, we wrote another proposal under the direction of Rochus Vogt [a former Caltech provost], which took us almost six months to write — it was a masterpiece. The proposal was to build two sites with 4-kilometer-long interferometers. The interferometers were to be staged. The first detector was based on the research, now reasonably mature, from the prototypes, with a sensitivity that offered a plausible chance for detection. 
The second detector was based on newer advanced concepts that had not yet been fully tested but that offered a good chance of detection. The proposal made it through the National Science Board, got accepted, and money started coming in significant amounts.</p> <p>By the 1990s, the rest of the history is easier. Now under the direction of Barry Barish [a Caltech physics professor], the sites were being built and developed, vacuum systems were made, and we started running the first detectors. By 2010, we had run the detectors and made vast improvements in their sensitivity, but had seen nothing. It was a clean nothing; the detectors had run at design sensitivity and we saw no anomalies that could be interpreted as gravitational waves. Based on the fact that we [had achieved our desired] design sensitivity and had carried out the science to determine some interesting upper limits on possible sources, we received funding to build Advanced LIGO.</p> </div> <p><strong>Q: </strong>How momentous for you is this discovery?</p> <p><strong>A:</strong> As far as having fulfilled the ambitions of a lot of us who have worked on this project, it is momentous. It’s the signal that all of us have wanted to see, because we knew about it, we had never had real proof of it, and it’s the limit of Einstein’s equations never observed before — the dynamics of the geometry of space-time in the strong [gravitational] field and high-velocity limit.</p> <p>To me, it’s a closure to something which has had a very complicated history. The field equations and the whole history of general relativity have been complicated. Here suddenly we have something we can grab onto and say, “Einstein was right. What a marvelous insight and intuition he had.”</p> <p>I feel an enormous sense of relief and some joy, but mostly relief. There’s a monkey that’s been sitting on my shoulder for 40 years, and he’s been nattering in my ear and saying, “Ehhh, how do you know this is really going to work? 
You’ve gotten a whole bunch of people involved. Suppose it never works right?” And suddenly, he’s jumped off. It’s a huge relief.</p> </div> Rainer WeissPhoto: Bryce VickmarkAstronomy, Astrophysics, Physics, Research, History of science, MIT History, Faculty, Technology and society, School of Science, National Science Foundation (NSF), NASA, LIGO, Space, astronomy and planetary science