MIT News - Language - Linguistics MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community. en Thu, 05 Mar 2020 14:50:01 -0500 MIT senior Christine Soh integrates computer science and linguistics Knowledge in both a technical and humanistic field prepares her to make new tools in computational linguistics. Thu, 05 Mar 2020 14:50:01 -0500 School of Humanities, Arts, and Social Sciences <p>Christine Soh fell in love with MIT the summer before her senior year of high school while attending the Women’s Technology Program run by MIT’s Department of Electrical Engineering and Computer Science. That’s when she discovered that learning to program in Python is just like learning a new language — and Soh loves languages.<br /> <br /> Growing up in Colorado, Soh spoke both English and Korean; she learned French and Latin in school. This June, Soh will graduate from MIT, where she has happily combined her passions by majoring in computer science and engineering (Course 6-3) and linguistics (Course 24). She plans to begin working toward a PhD in linguistics next year.<br /> <br /> With fluency in both technical and humanistic modes of thinking, Soh exemplifies a "bilingual" perspective. "Dual competence is a good model for undergraduates at MIT," says engineer/historian David Mindell, who encourages MIT students to "master two fundamental ways of thinking about the world, one technical and one humanistic or social. Sometimes these two modes will be at odds with each other, which raises critical questions. Other times they will be synergistic and energizing."<br /> &nbsp;<br /> <strong>The challenge of natural language and computation</strong><br /> <br /> “The really cool thing about language is that it’s universal,” says Soh, who has added ancient Greek, Chinese, and the programming language Java to her credits since that summer. 
“I can have a really interesting conversation with anybody, even if they don’t have a linguistics background, because everyone has experience with language.”<br /> <br /> That said, natural language is difficult for computers to comprehend — something Soh finds fascinating. “It’s really interesting to think about how we understand language,” she says. “How is it that computers have such a hard time understanding what we find so easy?”<br /> <br /> <strong>Tools from computational linguistics to improve speech</strong><br /> <br /> Pairing linguistics with computer science has allowed Soh to explore cutting-edge research combining the two disciplines. Thanks to MIT’s Advanced Undergraduate Research Opportunities Program, Soh got the chance to explore whether speech analysis software can be used as a tool for the clinical diagnosis of speech impairments.</p> <p>“It’s very difficult to correctly diagnose a child because a speech impairment can be caused by a ton of different things,” says Soh. Working with the Speech Communication Group in MIT’s Research Laboratory of Electronics, Soh has been developing a tool that can listen to a child’s speech and extract linguistic information, such as where in the mouth the sound was produced, thus identifying deviations from the proper formation of the word. “We can then use computational techniques to see if there are patterns to the modifications that have been made and see if these patterns can distinguish one underlying condition from another.”<br /> <br /> <strong>A natural leader</strong></p> <p>Even if the team isn’t able to find such patterns, Soh says the tool could be used by speech pathologists to learn more about what linguistic modifications a child might need to make to improve speech.
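The pattern-finding step Soh describes can be sketched in a few lines of Python. Everything here is illustrative (the place-of-articulation labels, the sample data, and the simple tallying are assumptions about how such an analysis might look, not the research group's actual pipeline):

```python
from collections import Counter

def substitution_patterns(utterances):
    """Tally a child's sound modifications as (expected, produced)
    place-of-articulation pairs, ignoring correct productions."""
    return Counter(
        (expected, produced)
        for expected, produced in utterances
        if expected != produced
    )

# Hypothetical transcriptions: (target place, produced place) per sound.
child_a = [("alveolar", "velar"), ("alveolar", "velar"), ("labial", "labial")]

print(substitution_patterns(child_a))
# Counter({('alveolar', 'velar'): 2})
```

Comparing such substitution profiles across many children is, in spirit, how one could look for patterns that distinguish one underlying condition from another.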
In December, Soh presented a poster on this work at the annual meeting of the Acoustical Society of America and was honored with a first-place prize in her category (signal processing in acoustics).<br /> <br /> Exploring such real-world applications for computational linguistics helped inspire Soh to apply to doctoral programs in linguistics for next year. “I’ll be doing research that will be integrating computer science and linguistics,” she says, noting that possible applications of computational linguistics include working to improve speech-recognition software or to make machine-produced speech sound more natural. “I look forward to using the knowledge and skills I’ve learned at MIT in doing that research.”<br /> <br /> “Christine’s unique energy and deep interests in both linguistics and computer science should enable her to accomplish great things,” says Suzanne Flynn, a professor of linguistics who has had Soh as a student. “She is a natural leader.”<br /> &nbsp;<br /> <strong>From field methods to neurolinguistics</strong><br /> <br /> Looking back at her time at MIT, Soh recalls particularly enjoying two linguistics classes: 24.909 (Field Methods in Linguistics), which explores the structure of an unfamiliar language through direct work with a native speaker (in Soh’s year, the class centered on Wolof, which is spoken in Senegal, the Gambia, and Mauritania), and 24.906 (The Linguistic Study of Bilingualism).<br /> <br /> In the latter class, Soh says, “We looked at neurolinguistics, what’s happening in the brain as the bilingual brain developed. We looked at topics in sociolinguistics: In communities that are bilingual, like Quebec, what kind of impact does it have on society, such as how schools are run? … We got to see a spectrum of linguistics.
It was really cool.”<br /> <br /> <strong>Building community at MIT</strong><br /> <br /> Outside class, Soh says she found community at MIT through the Asian Christian Fellowship and the Society of Women Engineers (SWE), which she served last year as vice president of membership. “SWE has also been a really awesome community and has opened up opportunities for conversation about what it means to be a woman engineer,” she says.<br /> <br /> Interestingly, Soh almost didn’t apply to MIT at all, simply because her brother was already at the Institute. (Albert Soh ’18 is now a high school teacher of math and physics.) Fortunately, the Women’s Technology Program changed her mind, and as she nears graduation, Soh says, "MIT has been absolutely fantastic.”<br /> &nbsp;</p> <h5><em>Story prepared by MIT SHASS Communications<br /> Editorial and Design Director: Emily Hiestand<br /> Senior Writer: Kathryn O'Neill</em><br /> &nbsp;</h5> Potential applications of Soh's work in computational linguistics include improving speech recognition software and making machine-produced speech sound more natural.Photo: Jon Sachs/MIT SHASS Communications School of Humanities Arts and Social Sciences, Electrical engineering and computer science (EECS), SuperUROP, Research Laboratory of Electronics, computer science, Linguistics, Students, Profile, Women in STEM, School of Engineering, MIT Schwarzman College of Computing QS World University Rankings rates MIT No. 1 in 12 subjects for 2020 Institute ranks second in five subject areas. Tue, 03 Mar 2020 19:01:01 -0500 MIT News Office <p>MIT has been honored with 12 No. 1 subject rankings in the QS World University Rankings for 2020.</p> <p>The Institute received a No. 
1 ranking in the following QS subject areas: Architecture/Built Environment; Chemistry; Computer Science and Information Systems; Chemical Engineering; Civil and Structural Engineering; Electrical and Electronic Engineering; Mechanical, Aeronautical and Manufacturing Engineering; Linguistics; Materials Science; Mathematics; Physics and Astronomy; and Statistics and Operational Research.</p> <p>MIT also placed second in five subject areas: Accounting and Finance; Biological Sciences; Earth and Marine Sciences; Economics and Econometrics; and Environmental Sciences.</p> <p>Quacquarelli Symonds Limited subject rankings, published annually, are designed to help prospective students find the leading schools in their field of interest. Rankings are based on research quality and accomplishments, academic reputation, and graduate employment.</p> <p>MIT has been ranked as the No. 1 university in the world by QS World University Rankings for eight straight years.</p> Afternoon light streams into MIT’s Lobby 7.Image: Jake BelcherRankings, Computer science and technology, Linguistics, Chemical engineering, Civil and environmental engineering, Mechanical engineering, Chemistry, Materials science, Mathematics, Physics, Economics, EAPS, Business and management, Accounting, Finance, DMSE, School of Engineering, School of Science, School of Architecture and Planning, Sloan School of Management, School of Humanities Arts and Social Sciences, Electrical Engineering & Computer Science (eecs), Architecture, Biology, Aeronautical and astronautical engineering “She” goes missing from presidential language Even when people believed Hillary Clinton would win the 2016 election, they did not use “she” to refer to the next president. 
Wed, 08 Jan 2020 01:00:20 -0500 Anne Trafton | MIT News Office <p>Throughout most of 2016, a significant percentage of the American public believed that the winner of the November 2016 presidential election would be a woman — Hillary Clinton.</p> <p>Strikingly, a new study from cognitive scientists and linguists at MIT, the University of Potsdam, and the University of California at San Diego shows that despite those beliefs, people rarely used the pronoun “she” when referring to the next U.S. president before the election. Furthermore, when reading about the future president, encountering the pronoun “she” caused a significant stumble in their reading.</p> <p>“There seemed to be a real bias against referring to the next president as ‘she.’ This was true even for people who most strongly expected and probably wanted the next president to be a female,” says Roger Levy, an MIT professor of brain and cognitive sciences and the senior author of the new study. “There’s a systematic underuse of ‘she’ pronouns for these kinds of contexts. It was quite eye-opening.”</p> <p>As part of their study, Levy and his colleagues also conducted similar experiments in the lead-up to the 2017 general election in the United Kingdom, which determined the next prime minister. In that case, people were more likely to use the pronoun “she” than “he” when referring to the next prime minister.</p> <p>Levy suggests that sociopolitical context may account for at least some of the differences seen between the U.S. 
and the U.K.: At the time, Theresa May was prime minister and very strongly expected to win, plus many Britons likely remember the long tenure of former Prime Minister Margaret Thatcher.</p> <p>“The situation was very different there because there was an incumbent who was a woman, and there is a history of referring to the prime minister as ‘she’ and thinking about the prime minister as potentially a woman,” he says.</p> <p>The lead author of the study is Titus von der Malsburg, a research affiliate at MIT and a researcher in the Department of Linguistics at the University of Potsdam, Germany. Till Poppels, a graduate student at the University of California at San Diego, is also an author of the paper, which appears in the journal <em>Psychological Science</em>.</p> <p><strong>Implicit linguistic biases</strong></p> <p>Levy and his colleagues began their study in early 2016, planning to investigate how people’s expectations about world events, specifically, the prospect of a woman being elected president, would influence their use of language. They hypothesized that the strong possibility of a female president might override the implicit bias people have toward referring to the president as “he.”</p> <p>“We wanted to use the 2016 electoral campaign as a natural experiment, to look at what kind of language people would produce or expect to hear as their expectations about who was likely to win the race changed,” Levy says.</p> <p>Before beginning the study, he expected that people’s use of the pronoun “she” would go up or down based on their beliefs about who would win the election.
He planned to explore how long it would take for changes in pronoun use to appear, and how much of a boost “she” usage would experience if a majority of people expected the next president to be a woman.</p> <p>However, such a boost never materialized, even though Clinton was expected to win the election.</p> <p>The researchers performed their experiment 12 times between June 2016 and January 2017, with a total of nearly 25,000 participants from the Amazon Mechanical Turk platform. The study included three tasks, and each participant was asked to perform one of them. The first task was to predict the likelihood of three candidates winning the election — Clinton, Donald Trump, or Bernie Sanders. From those numbers, the researchers could estimate the percentage of people who believed the next president would be a woman. This number was higher than 50 percent during most of the period leading up to the election, and reached just over 60 percent right before the election.</p> <p>The next two tasks were based on common linguistics research methods — one to test people’s patterns of language production, and the other to test how the words they encounter affect their reading comprehension.</p> <p>To test language production, the researchers asked participants to complete a paragraph such as “The next U.S. president will be sworn into office in January 2017. After moving into the Oval Office, one of the first things that ….”</p> <p>In this task, about 40 percent of the participants ended up using a pronoun in their text. Early in the study period, more than 25 percent of those participants used “he,” fewer than 10 percent used “she,” and around 50 percent used “they.” As the election got closer, and Clinton’s victory seemed more likely, the percentage of “she” usage never went up, but usage of “they” climbed to about 60 percent.
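The production-task numbers reported here reduce to simple proportions over participants' pronoun-containing continuations. A minimal sketch with made-up counts (not the study's data):

```python
def pronoun_shares(completions):
    """Return each pronoun's share of all pronoun-containing completions."""
    counts = {}
    for pronoun in completions:
        counts[pronoun] = counts.get(pronoun, 0) + 1
    return {p: n / len(completions) for p, n in counts.items()}

# Illustrative sample: 20 completions that contained a pronoun.
sample = ["he"] * 5 + ["she"] * 2 + ["they"] * 13
print(pronoun_shares(sample))
# {'he': 0.25, 'she': 0.1, 'they': 0.65}
```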
While these results indicate that the singular “they” has reached widespread acceptance as a de facto standard in contemporary English, they also suggest a strong persistent bias against using “she” in a context where the gender of the individual referred to is not yet known.</p> <p>“After Clinton won the primary, by late summer, most people thought that she would win. Certainly Democrats, and especially female Democrats, thought that Clinton would win. But even in these groups, people were very reluctant to use ‘she’ to refer to the next president. It was never the case that ‘she’ was preferred over ‘he,’” Levy says.</p> <p>For the third task, participants were asked to read a short passage about the next president. As the participants read the text on a screen, they had to press a button to reveal each word of the sentence. This setup allows the researchers to measure how quickly participants are reading. Surprise or difficulty in comprehension leads to longer reading times.</p> <p>In this case, the researchers found that when participants encountered the pronoun “she” in a sentence referring to the next president, it cost them about a third of a second in reading time — a seemingly short amount of time that is nevertheless known from sentence processing research to indicate a substantial disruption relative to ordinary reading — compared to sentences that used “he.” This did not change over the course of the study.</p> <p>“For months, we were in a situation where large segments of the population strongly expected that a woman would win, yet those segments of the population actually didn’t use the word ‘she’ to refer to the next president, and were surprised to encounter ‘she’ references to the next president,” Levy says.</p> <p><strong>Strong stereotypes</strong></p> <p>The findings suggest that gender biases regarding the presidency are so deeply ingrained that they are extremely difficult to overcome even when people strongly believe that the next president 
will be a woman, Levy says.</p> <p>“It was surprising that the stereotype that the U.S. president is always a man would so strongly influence language, even in this case, which offered the best possible circumstances for particularized knowledge about an upcoming event to override the stereotypes,” he says. “Perhaps it’s an association of different pronouns with positions of prestige and power, or it’s simply an overall reluctance to refer to people in a way that indicates they’re female if you’re not sure.”</p> <p>The U.K. component of the study was conducted in June 2017 (before the election) and July 2017 (after the election but before Theresa May had successfully formed a government). Before the election, the researchers found that “she” was used about 25 percent of the time, while “he” was used less than 5 percent of the time. However, reading times for sentences referring to the prime minister as “she” were no faster than those for “he,” suggesting that there was still some bias against “she” in comprehension relative to usage preferences, even in a country that already has a woman prime minister.</p> <p>The type of gender bias seen in this study appears to extend beyond previously seen stereotypes that are based on demographic patterns, Levy says. For example, people usually refer to nurses as “she,” even if they don’t know the nurse’s gender, and more than 80 percent of nurses in the U.S. are female. In an ongoing study, von der Malsburg, Poppels, Levy, and recent MIT graduate Veronica Boyce have found that even for professions that have fairly equal representation of men and women, such as baker, “she” pronouns are underused.</p> <p>“If you ask people how likely a baker is to be male or female, it’s about 50/50. But if you ask people to complete text passages that are about bakers, people are twice as likely to use ‘he’ as ‘she,’” Levy says.
“Embedded within the way that we use pronouns to talk about individuals whose identities we don’t know yet, or whose identities may not be definitive, there seems to be this systematic underconveyance of expectations for female gender.”</p> <p>The research was funded by the National Institutes of Health, a Feodor Lynen Research Fellowship from the Alexander von Humboldt Foundation, and an Alfred P. Sloan Fellowship.</p> A new study reveals that although a significant percentage of Americans believed Hillary Clinton would win the 2016 presidential election, people rarely used the pronoun “she” when referring to the next president.Image: MIT NewsResearch, Brain and cognitive sciences, Linguistics, School of Science, Women, Behavior, Language, Politics, National Institutes of Health (NIH) MIT News Podcast: Build your own language (with transcript) Wed, 18 Dec 2019 00:00:00 -0500 MIT News Office <p><em>The following podcast and transcript are part of a feature on MIT's course 24.917 (ConLangs: How to Construct a Language). <a href="" target="_self">Read the accompanying article.</a></em></p> <p><iframe allow="autoplay" frameborder="no" height="166" scrolling="no" src=";color=%23ff5500&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false&amp;show_teaser=true" width="100%"></iframe></p> <p style="margin-left:1.5in;">FEMALE VOICE: All human beings are born free and equal in dignity and rights. We are endowed with reason and conscience to act. [Crosstalk] [Phrases in foreign languages]</p> <p style="margin-left:1.5in;">HOST: Language. We as human beings are surrounded by language all the time, whether we're reading, writing or speaking it. Language is embedded in our everyday. But what is language? What makes language a language, and not just a group of words, gestures, or sounds? By definition, language is the method of human communication, consisting of the use of words in a structured and conventional way. 
Simply put, language is how we interact with our world and with one another. But how does it work? And how do we as humans learn it?</p> <p style="margin-left:1.5in;">HOST: At the undergraduate level here at MIT, professor of linguistics Norvin Richards has asked his students to think about such questions and try to understand how human languages actually work by creating their own.</p> <p style="margin-left:1.5in;">ALYSSA WELLS-LEWIS: So my language is Dænikjə.</p> <p style="margin-left:1.5in;">SHILOH CURTIS: My language is called Xalate.</p> <p style="margin-left:1.5in;">JOSEPH NOSZEK:&nbsp;My language is called Sowopuwuk.</p> <p style="margin-left:1.5in;">STUDENT: It's called Ehtokh.</p> <p style="margin-left:1.5in;">HOST:&nbsp; In his course, Constructed Languages, Professor Richards introduces students to the basics of linguistics such as phonetics (making sounds), morphology (forming words), and syntax (developing phrases) to assist them in their creations. But beyond that, they have free rein to develop a language of their choice and a story of the people who speak it.</p> <p style="margin-left:1.5in;">SHILOH CURTIS: For the first assignment we were supposed to make up like a back story for our languages, so mine is designed for the population of a generation starship, which is a spaceship that takes generations to reach another, like, habitable planet so you just have a society that will live on it for hundreds of years and just exist on the spaceship until they actually reach the planet.
And I wanted my language to be sort of vaguely pronounceable by speakers of English, Russian, and Mandarin Chinese because I figure most people in the world especially that would be going on this starship would be able to speak one of these three languages.</p> <p style="margin-left:1.5in;">JOSEPH NOSZEK: My language is a language that's designed to be used as a torture device, to torture people by being insufferably, painfully, and inappropriately cute. The idea is basically there are only two vowels which are “oo” and “oh”, and using them a lot is maddening. [Laughs]</p> <p style="margin-left:1.5in;">JOSEPH NOSZEK: “Oo, ook sowopuwuk,” which means, “I speak Sowopuwuk.” “Oo dwong jowoong,” which is, “I eat fish.” “O dowa pudo kuta oouton,” which is, “you will buy a battery.” And the last one here is, “Oo dwong ovo oo ovo do so,” which is, “I ate an egg that was good.”</p> <p style="margin-left:1.5in;">HOST: Professor Richards, who received both his undergraduate and graduate degrees here at MIT, designed this course as a fun and creative way to get students interested in linguistics. A self-proclaimed linguist who enjoys learning languages, Richards can speak and understand a handful of languages and has been known to rattle off words from purposely designed languages, like Klingon, created for the “Star Trek” series, to make a linguistic point.</p> <p style="margin-left:1.5in;">NORVIN RICHARDS: In linguistics, what we are trying to do is to, to describe and understand completely everything it is that you know when you know how to use language, when you know how to speak, when you know how to understand, when you know how to sign if you're signing. How is it that you are able to do all of the very complicated things that we do when we speak and understand each other? How do you learn to do those things?
And, and what is it exactly that you're manipulating when you manipulate language?</p> <p style="margin-left:1.5in;">NORVIN RICHARDS: So they spend the semester creating languages and at the end they have a mini grammar of a language that they've spent the semester creating. And they also have heard a lot of information about how the languages of the world work and how they don't work. Kinds of languages that exist and kinds of languages that as far as we know, don't exist. And whenever I say, "Here's a kind of thing that exists, and here, over here, these are kinds of languages that as far as we know don't exist," I get two kinds of students. There are students who say, "Okay, I will make a language that could be a normal human language," and you get other students who they hear me say, "No, no language ever does this." And they say, "That. That's what I want to do. I'll put that in my language."</p> <p style="margin-left:1.5in;">NORVIN RICHARDS:&nbsp; [crosstalk] Awesome, that sounds good, you need to have —</p> <p style="margin-left:1.5in;">LULU RUSSELL: So my language is called Lɵʌ. It's bimodal, which means you speak and sign at the same time. There's currently no existing language that does this, but my language you just speak words and use sign language at the same time to convey your meaning. Some of the signs you can hear because there are snaps and slaps. For example, saying I am speaking my language, is, “Nah Lɵʌ." So you can hear two of the signs there because there's two hits. But it just means "I am speaking with my language." And the signs that I did that you couldn't see were me using this personal pronoun I, um, as the subject which is the hit, and then with my language is another hit using a preposition.</p> <p style="margin-left:1.5in;">HOST: Throughout the semester, students get a unique opportunity to spend time in this intellectual space they may otherwise not tap into. But the languages have to work. They have to follow the rules. 
They have to make sense.</p> <p style="margin-left:1.5in;">NORVIN RICHARDS: So we do a lot of talking about ways in which languages are alike and ways in which languages are different and then what kinds of problems they have to solve in one way or another and different ways that languages solve them, different kinds of grammatical constructions that languages use. We talk about things that some languages do but others don't. Often during the course I say, "Okay, so here's a menu of things. You can choose one of these or you can make up your own. But you have to decide how your language, you know, does these things, which of these things it does."</p> <p style="margin-left:1.5in;">HOST: Students seem to pull inspiration for their languages from a variety of places. There is no shortage of individuality. Their languages are creative, complete, organized and extremely detailed.</p> <p style="margin-left:1.5in;">ALYSSA WELLS-LEWIS: I kind of really went in hard with the lore behind the language [laughs] but I'm a big fan of "Avatar: The Last Airbender" which is a, a TV show.</p> <p style="margin-left:1.5in;">AUDIO FROM AVATAR: Only the Avatar, master of all —</p> <p style="margin-left:1.5in;">ALYSSA WELLS-LEWIS: And so I just picked like, one of the creatures from that show and I was like, "Okay, I'm going to write a language for them." So I picked the buzzard wasps which is a mix between a vulture and a wasp. And so they have like a bird beak and then like the body of a wasp. And so I was thinking, like, in terms of the sounds that they'd be able to make, assuming that they have teeth, they would probably be able to make all the sounds except for the ones that use your lips. 
So it's a language that has a lot of t’s and b’s and very open vowels.</p> <p style="margin-left:1.5in;">ALYSSA WELLS-LEWIS: The way that you say, "I speak dænikjə," is “nee ho unok dænikjə.” Another one is, "I have food;" that is, “mee zanok foosh.” And the way that I kind of came up with the words is I kind of play around with what feels right, I guess. It's a very creative class, which I really, really, enjoy.</p> <p style="margin-left:1.5in;">HOST: The class, which debuted last year, is already one of the most popular classes offered in linguistics. And according to Richards, typically none of the students who take the class are linguistics majors. Rather, the course is populated with business students, chemists, computer scientists, and engineers.</p> <p style="margin-left:1.5in;">NORVIN RICHARDS: We get students who take the class because they want to spend some time doing something fun and creative, and maybe they hadn't thought much about language before, but they're interested in trying it.</p> <p style="margin-left:1.5in;">SHILOH CURTIS: I was sort of casually interested in linguistics before I got to MIT. I didn't know a whole lot about it but I was like: This is a topic that I want to explore some more if I get a chance. In my freshman spring I took intro to linguistics, which happened to be taught by the same professor, Professor Richards, and I was like: Linguistics is awesome, and I love this professor. And I found out he was teaching this conlang class and I was like: Well, obviously I need to take this.</p> <p style="margin-left:1.5in;">HOST: In the field that exists at the intersection of science and the humanities, linguists try to understand exactly what goes on in the mind when we communicate and understand each other.
For students, their study and understanding of language and how it works can spread well beyond the constraints of this class and be applied to other areas of study such as their major.</p> <p style="margin-left:1.5in;">JOSEPH NOSZEK: Civil and environmental engineering is my major but I'm in the core of systems engineering within that. Systems engineering is when you're looking at something that, you know, has a lot of pieces, very big, has a lot of data, and you just have to try to make sense of it somehow and often you have to improve it. And I feel like there's a similarity that when you have a language you know, that's a system. There are a lot of parts, a lot of rules, a lot of words. There's already this sort of, like, systems perspective you can have on it of, like, ah, here's the system, how do I make my own sentences out of that?</p> <p style="margin-left:1.5in;">HOST: Besides assisting students in the creation of languages, Professor Richards also takes a strong interest in preserving languages in danger of fading away. He has spent decades of his career working with the Wampanoag people of Eastern Massachusetts as they attempt to revive their native language.</p> <p style="margin-left:1.5in;">NORVIN RICHARDS: Most of the world's languages are in danger of vanishing. Not the languages that you've heard of; not, you know, English or Spanish or French. Those are not going anywhere but if you count the languages of the world, which is hard to do, there are something like six or seven thousand languages in the world, and at least half of them are in danger of vanishing. How do you know when a language is in danger of vanishing? It comes in various degrees. Maybe the most extreme is there are languages that are only spoken by a few elderly people and no one is in the process of learning them now.</p> <p style="margin-left:1.5in;">NORVIN RICHARDS: Many of the indigenous languages of this country for example, are in that shape.
Lots of the indigenous languages of Australia, there are lots of languages in Africa and various places in the world where many languages they're in trouble. I have the honor of being involved in the Wampanoag project, which is a project that attempts to do this for the language that was spoken here by the people who taught the Pilgrims how to survive, so the people who live on Cape Cod, the traditional owners of the place where we are now. And that language went through about a century of not being spoken by anyone at all but the Wampanoag are now attempting to revive its use so there are many texts in Wampanoag including a complete translation of the Bible. It's the first Bible that was published in this hemisphere, it was published here in Boston in the 1600s and many other documents, mostly legal documents, deeds and things like that.</p> <p style="margin-left:1.5in;">NORVIN RICHARDS: In a world where, you know, Native Americans in the larger culture and a lot of the time they're sort of relegated to, you know, sports mascots and Halloween costumes, you know. So to be able to say no, you know, you can dress like me and you can pretend to look like me, but I'm the only one who's me. And this is the way that we talk. That's an especially important thing for them to be able to say to the outside world. No, you know, this thing, this is mine and I'm the expert on it, you know. Me and the other people like me, we're the people who understand this and we get to decide to what extent we're going to share it with the outside world, but it's ours.</p> <p style="margin-left:1.5in;">HOST: Language. It unites us as a species because human communication is unique. Other animals communicate but as far as we know, it is uniquely human to create and use language. But different languages set people apart from each other. 
When we learn about the creation of specific languages, we learn about the people who made them, and when we study what it takes to build any language, it helps us understand what it is to be human.</p> <p style="margin-left:1.5in;">HOST: Thanks for listening. You can find more audio content from MIT on Apple Podcasts, Google Play, or wherever you get your podcasts.</p> Podcasts, Language, Linguistics, Classes and programs, School of Humanities Arts and Social Sciences How to build a language MIT students are inventing constructed languages — or “conlangs” — in a class that uses linguistics to supply the building blocks. Tue, 17 Dec 2019 23:59:59 -0500 School of Humanities, Arts, and Social Sciences <p>Wouldn’t it be great if there were an exclamation designed specifically to use when your cellphone battery runs out of juice? Or a word that perfectly captures the idea of doing something for no reason?<br /> <br /> This semester, MIT students have been making up such words — but not for English or any other known language. They are constructing entirely new languages, or “conlangs,” in a class that uses linguistics, the science of language, to supply the necessary building blocks.<br /> <br /> One student, who took 24.917 (ConLangs: How to Construct a Language) this fall, created a language for underwater creatures who speak in shades of color. Another invented a language that combines speech with whistling. Senior Jessica Tang’s new language is for spaceships that speak. “It’s not a super logical premise,” she says, “but it's a lot of fun facing the constraints. And I like a lot of the words in ‘spaceship-speak’ because they are just really weird.”
To achieve that, it’s useful to “understand something about how human languages actually work,” says Professor Norvin Richards, a linguistics scholar who teaches 24.917.<br /> <br /> Understanding how languages work is what the linguistics field is all about, and 24.917 provides a thorough introduction to the subject — including fundamental topics such as phonetics (making sounds), morphology (forming words), and syntax (developing phrases). The class, which debuted in 2018, has quickly become one of the most popular offered by MIT’s top-ranked linguistics program.</p> <p><iframe allow="autoplay" frameborder="no" height="166" scrolling="no" src=";color=%23ff5500&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false&amp;show_teaser=true" width="100%"></iframe></p> <p><span style="font-size:11px;"><em>In the above audio short, hear more from students and MIT professor of linguistics Norvin Richards about their work and the purpose of course 24.917. View a full transcript <a href="">here</a>.</em></span></p> <p><strong>Language and the mind</strong><br /> <br /> “One of the things you discover when you begin to learn about language is that there are all sorts of things that we do effortlessly, without thinking about it, but that are quite complicated,” Richards says. 
For example, English has quite a strict rule for ordering adjectives — it's always “a big red car,” never “a red big car.” New English learners routinely have to memorize this far-from-universal rule, while native speakers may not even be aware of it.<br /> <br /> “One of the goals of 24.917 is to show students some of what we know about how languages work thanks to all the work that’s been done in linguistics, which is the study of what exactly it is you know when you know a language,” Richards says.<br /> <br /> When asked to elaborate, Richards explains, “There are certain kinds of linguistic tasks that people seem to invariably accomplish in the same ways, no matter what language they speak.” Linguists endeavor to explain why that is. “A working hypothesis is that part of being a human being is having the kind of mind that allows you to construct and use language in certain ways but not others,” Richards says. “We're trying to discover what those properties of the human mind are; what kinds of creatures are human beings?”</p> <p><strong>Surprises</strong><br /> <br /> 24.917, which introduces students to some of the major quests of linguistics, is drawing many MIT undergraduates to explore the field more completely. Surprises abound.<br /> <br /> Joseph Noszek, a senior majoring in civil and environmental engineering, says he has found it fascinating to learn phonetics — including the International Phonetic Alphabet (IPA), a system for pronouncing unfamiliar words. “We started out talking about how you get sounds through points of articulation and how you can group consonants based on where your tongue is, what your lips are doing, and how much air you’re letting out,” Noszek says. With this information, plus some familiarity with the IPA, he has found it possible to produce sounds he wasn’t familiar with before. 
“I find it mind-blowing that there is a technique for this,” he says.<br /> <br /> Rebecca Sloan, a senior majoring in chemistry, echoed this sentiment, noting that students in 24.917 also watched speech videos recorded using magnetic resonance imaging (MRI), which enabled them to see how people used their speech organs to form sounds. “The most surprising thing for me in the class was being able to watch the MRIs of people saying words and realize that you can use that information to figure things out about different sounds,” she says.</p> <p><br /> <strong>From Swahili to Klingon</strong><br /> <br /> The class also provides a tour of world languages, as Richards demonstrates linguistic points using examples from Tagalog, Passamaquoddy, Thai, Korean, Swahili, Egyptian Arabic, O’odham, Dinka, and Welsh.<br /> <br /> Along the way, he even gives students some insight into the workings of two languages, Lardil and Wampanoag, in which Richards is a leading expert. For decades, Richards has worked with the Wampanoag people of Eastern Massachusetts as they have been successfully reviving their native language which, before the project began, had last been spoken in the 1800s. He has also spent years working to fight the obliteration of Lardil, an Aboriginal language once widely spoken on Mornington Island, Australia, but now nearly extinct.<br /> <br /> As Richards outlines various linguistic behaviors — such as the forming of plurals or systems of agreement — he often includes examples from these languages. But not surprisingly for a class on constructed languages, Richards also includes examples from languages that were purposely designed — notably Klingon, which was created for the “Star Trek” entertainment universe, and Quenya and Sindarin, two languages created by J.R.R. Tolkien for his “Lord of the Rings” novels. 
(Richards will easily rattle off a few words of Klingon to make a linguistic point, but claims he speaks the language only “very badly.”)<br /> <br /> “Klingon is useful in talking about morphology, which is the study of how we make words up out of pieces of words,” says Richards, noting that while English doesn’t have much morphology, Klingon does. It’s what is known as an “agglutinative” language, which means that it commonly forms new words by adding prefixes and even long strings of suffixes to root words. “It’s like a chemical reaction going on. You add these things, and words change from one thing to another.”</p> <p><strong>Tools for new languages </strong></p> <p>As students learn how various languages form tenses, plurals, and kinship terms, as well as how they borrow and shape words taken from other languages, they are gaining the tools to create entirely new languages. Richards says, “You present students with a little menu of the kinds of sounds you can make, and the students are picking and choosing and sometimes picking something that no language does.”<br /> <br /> Other new languages to emerge from the class include a language designed to sound like beatboxing; a language that combines speech with sign language, packing meaning into both sounds and gestures; and a language designed for alien beings who make sounds by tapping on their exoskeletons.<br /> <br /> “Our students get some idea of the kinds of things we work on in the linguistics field,” says Richards, "and then they come up with all kinds of wonderful stuff.”</p> <p><em>Story prepared by MIT SHASS Communications<br /> Editorial director: Emily Hiestand<br /> Senior writer: Kathryn O’Neill</em></p> Junior Alex Cuellar with his constructed language. 
The chalkboard reads: "I can speak Oafal." Image: Allegra Boverman. School of Humanities Arts and Social Sciences, Linguistics, Language, Learning, Classes and programs, Students, Undergraduate, Education, teaching, academics Six MIT faculty elected 2019 AAAS Fellows Baggeroer, Flynn, Harris, Klopfer, Lauffenburger, and Leonard are recognized for their efforts to advance science. Tue, 26 Nov 2019 11:00:00 -0500 MIT News Office <p>Six MIT faculty members have been elected as fellows of the American Association for the Advancement of Science (AAAS).</p> <p>The new fellows are among a group of 443 AAAS members elected by their peers in recognition of their scientifically or socially distinguished efforts to advance science. This year’s fellows will be honored at a ceremony on Feb. 15, at the AAAS Annual Meeting in Seattle.</p> <p><a href="">Arthur B. Baggeroer</a> is a professor of mechanical, ocean and electrical engineering, the Ford Professor of Engineering, Emeritus, and an international authority on underwater acoustics. Throughout his career he has made significant advances in geophysical signal processing and sonar technology, in addition to serving as a long-time intellectual resource to the U.S. Navy.</p> <p><a href="">Suzanne Flynn</a> is a professor of linguistics and language acquisition, and a leading researcher on the acquisition of various aspects of syntax by children and adults in bilingual, second- and third-language contexts. She also works on the neural representation of the multilingual brain and issues related to language impairment, autism, and aging.&nbsp;Flynn is currently editor-in-chief and a co-founding editor of&nbsp;<em>Syntax: A Journal of Theoretical, Experimental and Interdisciplinary Research</em>.</p> <p><a href="">Wesley L. Harris</a> is the Charles Stark Draper Professor of Aeronautics and Astronautics and has served as MIT associate provost and head of the Department of Aeronautics and Astronautics. 
His academic research program includes unsteady aerodynamics, aeroacoustics, rarefied gas dynamics, sustainment of capital assets, and chaos in sickle cell disease. Prior to coming to MIT, he was a NASA associate administrator, responsible for all programs, facilities, and personnel in aeronautics.</p> <p><a href="">Eric Klopfer</a> is a professor and head of the Comparative Media Studies/Writing program and the director of the Scheller Teacher Education Program and The Education Arcade at MIT. His interests range from the design and development of new technologies for learning to professional development and implementation in schools.&nbsp;Much of Klopfer’s research has focused on computer games and simulations for building understanding of science, technology, engineering, and mathematics.</p> <p><a href="">Douglas Lauffenburger</a> is the Ford Professor of Biological Engineering, Chemical Engineering, and Biology. He and his research group investigate the interface of bioengineering, quantitative cell biology, and systems biology. The lab’s main focus has been on fundamental aspects of cell dysregulation, complemented by translational efforts in identifying and testing new therapeutic ideas.</p> <p><a href="">John J. Leonard</a> is the&nbsp;Samuel C. Collins Professor of Mechanical and Ocean Engineering and a leading expert in navigation and mapping for autonomous mobile robots. His research focuses on long-term visual simultaneous localization and mapping in dynamic environments. In addition to underwater vehicles, Leonard has applied his pursuit of persistent autonomy to the development of self-driving cars.</p> <p>This year’s fellows will be formally announced in the AAAS News and Notes section of <em>Science</em> on Nov. 28.</p> From left to right, top to bottom: Suzanne Flynn, Wesley L. Harris, Eric Klopfer, Douglas A. Lauffenburger, John J. Leonard, Arthur B. 
BaggeroerFaculty, School of Engineering, School of Humanities Arts and Social Sciences, Mechanical engineering, Linguistics, Aeronautics and Astronautics, Comparative Media Studies/Writing, Biological engineering, Awards, honors and fellowships Bot can beat humans in multiplayer hidden-role games Using deductive reasoning, the bot identifies friend or foe to ensure victory over humans in certain online games. Tue, 19 Nov 2019 23:59:59 -0500 Rob Matheson | MIT News Office <p>MIT researchers have developed a bot equipped with artificial intelligence that can beat human players in tricky online multiplayer games where player roles and motives are kept secret.</p> <p>Many gaming bots have been built to keep up with human players. Earlier this year, a team from Carnegie Mellon University developed the world’s first bot that can beat professionals in multiplayer poker. DeepMind’s AlphaGo made headlines in 2016 for besting a professional Go player. Several bots have also been built to beat professional chess players or join forces in cooperative games such as online capture the flag. In these games, however, the bot knows its opponents and teammates from the start.</p> <p>At the Conference on Neural Information Processing Systems next month, the researchers will present DeepRole, the first gaming bot that can win online multiplayer games in which the participants’ team allegiances are initially unclear. The bot is designed with novel “deductive reasoning” added into an AI algorithm commonly used for playing poker. This helps it reason about partially observable actions, to determine the probability that a given player is a teammate or opponent. 
In doing so, it quickly learns whom to ally with and which actions to take to ensure its team’s victory.</p> <p>The researchers pitted DeepRole against human players in more than 4,000 rounds of the online game “The Resistance: Avalon.” In this game, players try to deduce their peers’ secret roles as the game progresses, while simultaneously hiding their own roles. As both a teammate and an opponent, DeepRole consistently outperformed human players.</p> <p>“If you replace a human teammate with a bot, you can expect a higher win rate for your team. Bots are better partners,” says first author Jack Serrino ’18, who majored in electrical engineering and computer science at MIT and is an avid online “Avalon” player.</p> <p>The work is part of a broader project to better model how humans make socially informed decisions. Doing so could help build robots that better understand, learn from, and work with humans.</p> <p>“Humans learn from and cooperate with others, and that enables us to achieve together things that none of us can achieve alone,” says co-author Max Kleiman-Weiner, a postdoc in the Center for Brains, Minds and Machines and the Department of Brain and Cognitive Sciences at MIT, and at Harvard University. “Games like ‘Avalon’ better mimic the dynamic social settings humans experience in everyday life. You have to figure out who’s on your team and will work with you, whether it’s your first day of kindergarten or another day in your office.”</p> <p>Joining Serrino and Kleiman-Weiner on the paper are David C. Parkes of Harvard and Joshua B. Tenenbaum, a professor of computational cognitive science and a member of MIT’s Computer Science and Artificial Intelligence Laboratory and the Center for Brains, Minds and Machines.</p> <p><strong>Deductive bot</strong></p> <p>In “Avalon,” three players are randomly and secretly assigned to a “resistance” team and two players to a “spy” team. Both spy players know all players’ roles. 
During each round, one player proposes a subset of two or three players to execute a mission. All players simultaneously and publicly vote to approve or disapprove the subset. If a majority approve, the subset secretly determines whether the mission will succeed or fail. If two “succeeds” are chosen, the mission succeeds; if one “fail” is selected, the mission fails. Resistance players must always choose to succeed, but spy players may choose either outcome. The resistance team wins after three successful missions; the spy team wins after three failed missions.</p> <p>Winning the game basically comes down to deducing who is resistance or spy, and voting for your collaborators. But that’s actually more computationally complex than playing chess and poker. “It’s a game of imperfect information,” Kleiman-Weiner says. “You’re not even sure who you’re against when you start, so there’s an additional discovery phase of finding whom to cooperate with.”</p> <p>DeepRole uses a game-planning algorithm called “counterfactual regret minimization” (CFR) — which learns to play a game by repeatedly playing against itself — augmented with deductive reasoning. At each point in a game, CFR looks ahead to create a decision “game tree” of lines and nodes describing the potential future actions of each player. Game trees represent all possible actions (lines) each player can take at each future decision point. In playing out potentially billions of game simulations, CFR notes which actions had increased or decreased its chances of winning, and iteratively revises its strategy to include more good decisions. Eventually, it plans an optimal strategy that, at worst, ties against any opponent.</p> <p>CFR works well for games like poker, with public actions — such as betting money and folding a hand — but it struggles when actions are secret. 
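The self-play loop CFR is built on can be illustrated with regret matching, the update rule at its core: play in proportion to accumulated positive regret, compare each alternative action's payoff to what was actually earned, and add the difference to the regret tally. The sketch below is a toy illustration on matching pennies, not DeepRole's implementation; all names and numbers are illustrative.

```python
import random

ACTIONS = [0, 1]  # heads, tails

def payoff(a, b):
    # Matcher (player 0) wins +1 if the coins match, loses -1 otherwise.
    return 1 if a == b else -1

def strategy_from_regret(regret):
    # Play each action in proportion to its accumulated positive regret;
    # fall back to uniform when no action has positive regret yet.
    positive = [max(r, 0.0) for r in regret]
    total = sum(positive)
    if total == 0:
        return [1 / len(ACTIONS)] * len(ACTIONS)
    return [p / total for p in positive]

def train(iterations=20000, seed=0):
    rng = random.Random(seed)
    regret = [[0.0, 0.0], [0.0, 0.0]]        # per-player regret tallies
    strategy_sum = [[0.0, 0.0], [0.0, 0.0]]  # the *average* strategy converges

    for _ in range(iterations):
        strats = [strategy_from_regret(r) for r in regret]
        moves = [rng.choices(ACTIONS, weights=s)[0] for s in strats]
        for player in range(2):
            sign = 1 if player == 0 else -1  # player 1 wants a mismatch
            other = moves[1 - player]
            actual = sign * payoff(moves[player], other)
            for a in ACTIONS:
                # Regret = what action a would have earned minus what we got.
                regret[player][a] += sign * payoff(a, other) - actual
                strategy_sum[player][a] += strats[player][a]

    return [[s / iterations for s in psum] for psum in strategy_sum]

avg = train()
# Both players' average strategies approach the 50/50 equilibrium.
```

As the article notes, full CFR layers this update over every node of a game tree; the toy version above shows why repeated self-play drives the average strategy toward an unexploitable one.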
The researchers’ CFR combines public actions and consequences of private actions to determine if players are resistance or spy.</p> <p>The bot is trained by playing against itself as both resistance and spy. When playing an online game, it uses its game tree to estimate what each player is going to do. The game tree represents a strategy that gives each player the highest likelihood to win as an assigned role. The tree’s nodes contain “counterfactual values,” which are basically estimates for a payoff that player receives if they play that given strategy.</p> <p>At each mission, the bot looks at how each person played in comparison to the game tree. If, throughout the game, a player makes enough decisions that are inconsistent with the bot’s expectations, then the player is probably playing as the other role. Eventually, the bot assigns a high probability for each player’s role. These probabilities are used to update the bot’s strategy to increase its chances of victory.</p> <p>Simultaneously, it uses this same technique to estimate how a third-person observer might interpret its own actions. This helps it estimate how other players may react, helping it make more intelligent decisions. “If it’s on a two-player mission that fails, the other players know one player is a spy. The bot probably won’t propose the same team on future missions, since it knows the other players think it’s bad,” Serrino says.</p> <p><strong>Language: The next frontier</strong></p> <p>Interestingly, the bot did not need to communicate with other players, which is usually a key component of the game. “Avalon” enables players to chat on a text module during the game. “But it turns out our bot was able to work well with a team of other humans while only observing player actions,” Kleiman-Weiner says. 
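The role-probability updates described above — shifting belief toward whichever role is most consistent with a player's observed actions — can be reduced to a Bayesian update. This is a minimal sketch with made-up likelihood numbers, not values from DeepRole.

```python
def update_role_belief(belief, action_likelihoods):
    """belief: {role: prob}; action_likelihoods: {role: P(observed action | role)}.
    Multiply the prior by the likelihood of the action under each role,
    then renormalize so the probabilities sum to 1."""
    posterior = {role: belief[role] * action_likelihoods[role] for role in belief}
    total = sum(posterior.values())
    return {role: p / total for role, p in posterior.items()}

# Start from the game's base rates: 3 of 5 players are resistance.
belief = {"resistance": 0.6, "spy": 0.4}

# Hypothetical observations: the player twice votes down teams that the
# game tree's resistance strategy would normally approve.
for _ in range(2):
    belief = update_role_belief(belief, {"resistance": 0.2, "spy": 0.7})

# After two inconsistent votes, belief has shifted sharply toward "spy".
```

Repeated over a game, updates like this are how "enough decisions that are inconsistent with the bot's expectations" end up assigning a high probability to the other role.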
“This is interesting, because one might think games like this require complicated communication strategies.”</p> <p>“I was thrilled to see this paper when it came out,” says Michael Bowling, a professor at the University of Alberta whose research focuses, in part, on training computers to play games. “It is really exciting seeing the ideas in DeepStack see broader application outside of poker. [DeepStack brought what has] been so central to AI in chess and Go to situations of imperfect information. But I still wasn't expecting to see it extended so quickly into the situation of a hidden role game like Avalon. Being able to navigate a social deduction scenario, which feels so quintessentially human, is a really important step. There is still much work to be done, especially when the social interaction is more open ended, but we keep seeing that many of the fundamental AI algorithms with self-play learning can go a long way.”</p> <p>Next, the researchers may enable the bot to communicate during games with simple text, such as saying a player is good or bad. That would involve assigning text to the correlated probability that a player is resistance or spy, which the bot already uses to make its decisions. Beyond that, a future bot might be equipped with more complex communication capabilities, enabling it to play language-heavy social-deduction games — such as the popular game “Werewolf,” which involves several minutes of arguing and persuading other players about who’s on the good and bad teams.</p> <p>“Language is definitely the next frontier,” Serrino says. 
“But there are many challenges to attack in those games, where communication is so key.”</p> DeepRole, an MIT-invented gaming bot equipped with “deductive reasoning,” can beat human players in tricky online multiplayer games where player roles and motives are kept secret.Research, Computer science and technology, Algorithms, Video games, Artificial intelligence, Machine learning, Language, Computer Science and Artificial Intelligence Laboratory (CSAIL), Brain and cognitive sciences, Electrical Engineering & Computer Science (eecs), School of Engineering Learning about China by learning its language MIT senior&#039;s longstanding passion for Mandarin leads to a hands-on taste of the complexities of functioning in a Chinese business context. Fri, 11 Oct 2019 10:20:01 -0400 Lisa Hickler | MIT Global Languages <p>Among MIT students who didn’t grow up speaking Chinese, few are able to discuss “machine learning models” in passable Mandarin. But that is just what computer science and engineering senior Max Allen is able to do, and this ability comes as a result of academic work, stints abroad, an internship, and also just having the passion to learn Chinese.</p> <p>With China a growing economic powerhouse and leader in STEM, it is no wonder that more and more students are attracted to studying Chinese. Nationally, enrollments in Chinese classes are up, as they are at MIT.</p> <p>But for Max Allen, his interest was first piqued by a teacher’s visit to his eighth-grade class. Intrigued by the sound of the language and structure of the writing system, Allen started taking Chinese classes in high school. To him, learning the language was akin to a big puzzle whose solution is slowly revealed. And since Allen has always been fond of puzzles, he wanted to pursue this.</p> <p>After only two years of high-school language study, Allen spent his 11th-grade year living with a host family in Beijing and attending school through a program called School Year Abroad. 
Allen returned to the United States able to converse in Mandarin, and also more adept at fitting in culturally. He found that living with a family gives you a level of familiarity with people that is hard to achieve otherwise.</p> <p>Chinese has gradually come to occupy a greater and greater share of Allen’s interests. Upon entering MIT, he decided to pursue a major in computer science and engineering (Course 6-3). After discovering that he could take Chinese to fulfill his humanities concentration requirements, Allen took Chinese V and VI, building on the work he did in high school. Even among MIT students, who are known for high academic achievement, Allen stood out for his effort and seriousness, Chinese lecturer Tong Chen noted.</p> <p>The more classes he took, and the more time he invested, the more Allen began to consider how Chinese might be part of his future academic and career paths.</p> <p>In spring 2018, Allen took “Business Chinese” as an elective concentration subject. Business Chinese helped Allen understand the social dynamics and subtleties of social relations in a business setting in China, including how these express themselves in language. As Panpan Gao, the instructor of Business Chinese, explains, the pedagogical approach of the class emphasizes case studies: “Through case studies of multinational companies and introductions to crucial business issues in China, we try to help students better understand Chinese business culture and trends, and expand their language skills so that they can communicate effectively and professionally with Chinese speakers in the workplace.”</p> <p>The class really got Allen thinking about whether he might want to pursue jobs that would employ his knowledge of Chinese.</p> <p>Allen put his Chinese skills to good use the following summer. He took an engineering internship with Airbnb — on a team with a special focus on mitigating financial fraud coming from China. 
The team was mostly made up of Chinese nationals, and team members generally discussed work matters in Mandarin. To do business in China, the team would need to understand how to market the product to Chinese customers; how to build a secure platform; and how to build payment applications that are in line with the expectations of Chinese consumers. This experience gave Allen a hands-on taste of the complexities of functioning in a Chinese business context.</p> <p>After the internship, Allen realized that to take his Chinese to the next level, he would need to put aside other academic pursuits for a period and spend more time studying the language in an immersive Chinese-speaking setting. He spent academic year 2018-2019 abroad studying Chinese: the fall in Taipei at the <a href="">International Chinese Language Program</a> of National Taiwan University, and the spring in Beijing at the <a href="">Inter-University Program for Chinese Language Studies</a> at Tsinghua University. Both programs are among the world’s top Chinese-language centers, offering intensive instruction with hours of work a day devoted to learning Mandarin. He particularly appreciated the intensive focus on conversation.</p> <p>While abroad, Allen found that when he ventured to out-of-the-way spots, he encountered curiosity from strangers who were less accustomed to seeing tourists. But when he demonstrated he could speak Chinese, people warmed up. “Speaking their native language helps to establish trust and rapport, which is important when they see you as just another outsider. But once a certain level of trust is established, people become more comfortable talking about meaningful things. And that's where the time investment of learning the language really pays off.”</p> <p>Now back at MIT for his senior year, Allen is considering how his multiple interests in computer science, international business, Chinese language, and cross-cultural communication skills might combine into a career path. 
The answer will take some time to untangle, but Allen is always up for the challenge of a big puzzle, and will remain open to the possibilities as he heads toward graduation.</p> MIT senior Max Allen (right) stands with Tsinghua University student Sean Chua in Beijing.Computer science and technology, China, Language, Students, Global Studies and Languages, Global, Profile, Business and management, Careers, School of Engineering, Classes and programs, School of Humanities Arts and Social Sciences, Electrical Engineering & Computer Science (eecs) Computing and artificial intelligence: Humanistic perspectives from MIT How the humanities, arts, and social science fields can help shape the MIT Schwarzman College of Computing — and benefit from advanced computing. Tue, 24 Sep 2019 00:00:00 -0400 School of Humanities, Arts, and Social Sciences <p><em>The MIT Stephen A. Schwarzman College of Computing </em><em>(SCC) </em><em>will reorient the Institute to bring the power of computing and artificial intelligence to all fields at MIT, and to allow the future of computing and AI to be shaped by all MIT disciplines.</em></p> <p><em>To support ongoing planning for the new college, Dean Melissa Nobles invited faculty from all 14 of MIT’s humanistic disciplines in the School of Humanities, Arts, and Social Sciences to respond to two questions:&nbsp;&nbsp; </em></p> <p><em>1) What domain knowledge, perspectives, and methods from your field should be integrated into the new MIT Schwarzman College of Computing, and why? 
</em><br /> <br /> <em>2) What are some of the meaningful opportunities that advanced computing makes possible in your field?&nbsp; </em></p> <p><em>As Nobles says in her foreword to the series, “Together, the following responses to these two questions offer something of a guidebook to the myriad, productive ways that technical, humanistic, and scientific fields can join forces at MIT, and elsewhere, to further human and planetary well-being.” </em></p> <p><em>The following excerpts highlight faculty responses, with links to full commentaries. The excerpts are sequenced by fields in the following order: the humanities, arts, and social sciences. </em></p> <p><strong>Foreword by Melissa Nobles, professor of political science and the Kenan Sahin Dean of the MIT School of Humanities, Arts, and Social Sciences </strong></p> <p>“The advent of artificial intelligence presents our species with an historic opportunity — disguised as an existential challenge: Can we stay human in the age of AI?&nbsp; In fact, can we grow in humanity, can we shape a more humane, more just, and sustainable world? 
With a sense of promise and urgency, we are embarked at MIT on an accelerated effort to more fully integrate the technical and humanistic forms of discovery in our curriculum and research, and in our habits of mind and action.” <a href="" target="_blank">Read more &gt;&gt;</a></p> <p><strong>Comparative Media Studies: William Uricchio, professor of comparative media studies</strong></p> <p>“Given our research and practice focus, the CMS perspective can be key for understanding the implications of computation for knowledge and representation, as well as computation’s relationship to the critical process of how knowledge works in culture — the way it is formed, shared, and validated.”</p> <p>Recommended action: “Bring media and computer scholars together to explore issues that require both areas of expertise: text-generating algorithms (that force us to ask what it means to be human); the nature of computational gatekeepers (that compels us to reflect on implicit cultural priorities); and personalized filters and texts (that require us to consider the shape of our own biases).” <a href="" target="_blank">Read more &gt;&gt;</a></p> <p><strong>Global Languages: Emma J. Teng, the T.T. and Wei Fong Chao Professor of Asian Civilizations</strong></p> <p>“Language and culture learning are gateways to international experiences and an important means to develop cross-cultural understanding and sensitivity. Such understanding is essential to addressing the social and ethical implications of the expanding array of technology affecting everyday life across the globe.”</p> <p>Recommended action: “We aim to create a 21st-century language center to provide a convening space for cross-cultural communication, collaboration, action research, and global classrooms. 
We also plan to keep the intimate size and human experience of MIT’s language classes, which only increase in value as technology saturates the world.” <a href="" target="_blank">Read more &gt;&gt;</a></p> <p><strong>History: Jeffrey Ravel, professor of history and head of MIT History </strong></p> <p>“Emerging innovations in computational methods will continue to improve our access to the past and the tools through which we interpret evidence. But the field of history will continue to be served by older methods of scholarship as well; critical thinking by human beings is fundamental to our endeavors in the humanities.”</p> <p>Recommended action: “Call on the nuanced debates in which historians engage about causality to provide a useful frame of reference for considering the issues that will inevitably emerge from new computing technologies. This methodology of the history field is a powerful way to help imagine our way out of today’s existential threats.” <a href="" target="_blank">Read more &gt;&gt;</a></p> <p><strong>Linguistics: Faculty of MIT Linguistics</strong></p> <p>“Perhaps the most obvious opportunities for computational and linguistics research concern the interrelation between specific hypotheses about the formal properties of language and their computational implementation in the form of systems that learn, parse, and produce human language.”</p> <p>Recommended action: “Critically, transformative new tools have come from researchers at institutions where linguists work side-by-side with computational researchers who are able to translate back and forth between computational properties of linguistic grammars and of other systems.” <a href="" target="_blank">Read more &gt;&gt;</a></p> <p><strong>Literature: Shankar Raman, with Mary C. Fuller, professors of literature</strong></p> <p>“In the age of AI, we could invent new tools for reading. 
Making the expert reading skills we teach MIT students even partially available to readers outside the academy would widen access to our materials in profound ways.”</p> <p>Recommended action: “At least three priorities of current literary engagement with the digital should be integrated into the SCC’s research and curriculum: democratization of knowledge; new modes of and possibilities for knowledge production; and critical analysis of the social conditions governing what can be known and who can know it.” <a href="" target="_blank">Read more &gt;&gt;</a></p> <p><strong>Philosophy: Alex Byrne, professor of philosophy and head of MIT Philosophy; and Tamar Schapiro, associate professor of philosophy</strong></p> <p>“Computing and AI pose many ethical problems related to: privacy (e.g., data systems design), discrimination (e.g., bias in machine learning), policing (e.g., surveillance), democracy (e.g., the&nbsp;Facebook-Cambridge Analytica data scandal), remote warfare, intellectual property, political regulation, and corporate responsibility.”</p> <p>Recommended action: “The SCC presents an opportunity for MIT to be an intellectual leader in the ethics of technology. The ethics lab we propose could turn this opportunity into reality.” <a href="" target="_blank">Read more &gt;&gt;</a></p> <p><strong>Science, Technology, and Society: Eden Medina and Dwaipayan Banerjee, associate professors of science, technology, and society</strong></p> <p>“A more global view of computing would demonstrate a broader range of possibilities than one centered on the American experience, while also illuminating how computer systems can reflect and respond to different needs and systems. Such experiences can prove generative for thinking about the future of computing writ large.”</p> <p>Recommended action: “Adopt a global approach to the research and teaching in the SCC, an approach that views the U.S. 
experience as one among many.” <a href="" target="_blank">Read more &gt;&gt;</a></p> <p><strong>Women's and Gender Studies: Ruth Perry, the Ann Friedlaender Professor of Literature; with Sally Haslanger, the Ford Professor of Philosophy, and Elizabeth Wood, professor of history</strong></p> <p>“The SCC presents MIT with a unique opportunity to take a leadership role in addressing some of the most pressing challenges that have emerged from the role computing technologies play in our society — including how these technologies are reinforcing social inequalities.”</p> <p>Recommended action: “Ensure that women’s voices are heard and that coursework and research are designed with a keen awareness of the difference that gender makes. This is the single most powerful way that MIT can address the inequities in the computing fields.” <a href="" target="_blank">Read more &gt;&gt;</a></p> <p><strong>Writing: Tom Levenson, professor of science writing </strong></p> <p>“Computation and its applications in fields that directly affect society cannot be an unexamined good. Professional science and technology writers are a crucial resource for the mission of the new college of computing, and they need to be embedded within its research apparatus.”</p> <p>Recommended action: “Intertwine writing and the ideas in coursework to provide conceptual depth that purely technical mastery cannot offer.” <a href="" target="_blank">Read more &gt;&gt;</a></p> <p><strong>Music: Eran Egozy, professor of the practice in music technology</strong></p> <p>“Creating tomorrow’s music systems responsibly will require a truly multidisciplinary education, one that covers everything from scientific models and engineering challenges to artistic practice and societal implications. The new music technology will be accompanied by difficult questions. Who owns the output of generative music algorithms that are trained on human compositions? 
How do we ensure that music, an art form intrinsic to all humans, does not become controlled by only a few?”</p> <p>Recommended action: “Through the SCC, our responsibility will be not only to develop the new technologies of music creation, distribution, and interaction, but also to study their cultural implications and define the parameters of a harmonious outcome for all.” <a href="" target="_blank">Read more &gt;&gt;</a></p> <p><strong>Theater Arts: Sara Brown, assistant professor of theater arts and MIT Theater Arts director of design</strong></p> <p>“As a subject, AI problematizes what it means to be human. There is an unending series of questions posed by the presence of an intelligent machine. The theater, as a synthetic art form that values and exploits liveness, is an ideal place to explore the complex and layered problems posed by AI and advanced computing.”</p> <p>Recommended action: “There are myriad opportunities for advanced computing to be integrated into theater, both as a tool and as a subject of exploration. As a tool, advanced computing can be used to develop performance systems that respond directly to a live performer in real time, or to integrate virtual reality as a previsualization tool for designers.” <a href="" target="_blank">Read more &gt;&gt;</a></p> <p><strong>Anthropology: Heather Paxson, the William R. Kenan, Jr. Professor of Anthropology</strong></p> <p>“The methods used in anthropology —&nbsp;a field that systematically studies human cultural beliefs and practices — are uniquely suited to studying the effects of automation and digital technologies in social life. For anthropologists, ‘Can artificial intelligence be ethical?’ is an empirical, not a hypothetical, question. Ethical for what? To whom? 
Under what circumstances?”</p> <p>Recommended action: “Incorporate anthropological thinking into the new college to prepare students to live and work effectively and responsibly in a world of technological, demographic, and cultural exchanges. We envision an ethnography lab that will provide digital and computing tools tailored to anthropological research and projects.” <a href="" target="_blank">Read more &gt;&gt;</a></p> <p><strong>Economics: Nancy L. Rose, the Charles P. Kindleberger Professor of Applied Economics and head of the Department of Economics; and David Autor, the Ford Professor of Economics and co-director of the MIT Task Force on the Work of the Future</strong></p> <p>“The intellectual affinity between economics and computer science traces back almost a century, to the founding of game theory in 1928. Today, the practical synergies between economics and computer science are flourishing. We outline some of the many opportunities for the two disciplines to engage more deeply through the new SCC.”</p> <p>Recommended action: “Research that engages the tools and expertise of economics on matters of fairness, expertise, and cognitive biases in machine-supported and machine-delegated decision-making; and on market design, industrial organization, and the future of work. Scholarship at the intersection of data science, econometrics, and causal inference. Cultivate depth in network science, algorithmic game theory and mechanism design, and online learning. Develop tools for rapid, cost-effective, and ongoing education and retraining for workers.” <a href="" target="_blank">Read more &gt;&gt;</a></p> <p><strong>Political Science: Faculty of the Department of Political Science</strong></p> <p>“The advance of computation gives rise to a number of conceptual and normative questions that are political, rather than ethical in character. 
Political science and theory have a significant role in addressing such questions as: How do major players in the technology sector seek to legitimate their authority to make decisions that affect us all? And where should that authority actually reside in a democratic polity?”</p> <p>Recommended action: “Incorporate the research and perspectives of political science in SCC research and education to help ensure that computational research is socially aware, especially with issues involving governing institutions, the relations between nations, and human rights.” <a href="" target="_blank">Read more &gt;&gt;</a></p> <p><span style="font-size:11px;"><em>Series prepared by SHASS Communications<br /> Series Editor and Designer: Emily Hiestand<br /> Series Co-Editor: Kathryn O’Neill</em></span></p> Image: Christine Daniloff, MITEducation, teaching, academics, Humanities, Arts, Social sciences, Computer science and technology, Artificial intelligence, Technology and society, MIT Schwarzman College of Computing, Anthropology, School of Humanities Arts and Social Sciences, Comparative Media Studies/Writing, Economics, Global Studies and Languages, History, Linguistics, Literature, Music, Philosophy, Political science, Program in STS, Theater, Music and theater arts, Women's and Gender Studies Comparing primate vocalizations Study shows Old World monkeys combine items in speech — but only two and never more, unlike humans. Tue, 03 Sep 2019 12:18:40 -0400 Peter Dizikes | MIT News Office <p>The utterances of Old World monkeys, some of our primate cousins, may be more sophisticated than previously realized — but even so, they display constraints that reinforce the singularity of human language, according to a new study co-authored by an MIT linguist.&nbsp;</p> <p>The study reinterprets evidence about primate language and concludes that Old World monkeys can combine two items in a language sequence. And yet, their ability to combine items together seems to stop at two. 
The monkeys are not able to recombine language items in the same open-ended manner as humans, whose languages generate an infinite variety of sequences.</p> <p>“We are saying the two systems are fundamentally different,” says Shigeru Miyagawa, an MIT linguist and co-author of a new paper detailing the study’s findings.</p> <p>That might seem apparent. But the study’s precise claim — that even if other primates can combine terms, they cannot do so in the way humans do — emphasizes the profound gulf in cognitive ability between humans and some of our closest relatives.</p> <p>“If what we’re saying in this paper is right, there’s a big break between two [items in a sentence], and [the potential for] infinity,” Miyagawa adds. “There is no three, there is no four, there is no five. Two and infinity. And that is the break between a nonhuman primate and human primates.”</p> <p>The paper, “Systems underlying human and Old World monkey communications: One, two, or infinite,” is published today in the journal <em>Frontiers in Psychology</em>. The authors are Miyagawa, who is a professor of linguistics at MIT; and Esther Clarke, an expert in primate vocalization who is a member of the Behavior, Ecology, and Evolution Research (BEER) Center at Durham University in the U.K.</p> <p>To conduct the study, Miyagawa and Clarke re-evaluated recordings of Old World monkeys, a family of primates with over 100 species, including baboons, macaques, and the proboscis monkey.</p> <p>The language of some of these species has been studied fairly extensively. Research starting in the 1960s, for example, established that vervet monkeys have specific calls when they see leopards, eagles, and snakes, all of which require different kinds of evasive action. Similarly, tamarin monkeys have one alarm call to warn of aerial predators and one to warn of ground-based predators.</p> <p>In other cases, though, Old World monkeys seem capable of combining calls to create new messages. 
The putty-nosed monkey of West Africa, for example, has a general alarm call, which scientists call “pyow,” and a specific alarm call warning of eagles, which is “hack.” Sometimes these monkeys combine them in “pyow-hack” sequences of varying length, a third message that is used to spur group movement.</p> <p>However, even these latter “pyow-hack” sequences start with “pyow” and end with “hack”; the terms are never alternated. Although these sequences vary in length and consequently can sound a bit different from each other, Miyagawa and Clarke break with some other analysts and think there is no “combinatorial operation” at work with putty-nosed monkey language, unlike the process through which humans rearrange terms. It is only the length of the “pyow-hack” sequence that indicates how far the monkeys will relocate.</p> <p>“The putty-nose monkey’s expression is complex, but the important thing is the overall length, which predicts behavior and predicts how far they travel,” Miyagawa says. “They start with ‘pyow’ and end up with ‘hack.’ They never go back to ‘pyow.’ Never.”</p> <p>As a result, Miyagawa adds, “Yes, those calls are made up of two items. Looking at the data very carefully it is apparent. The other thing that is apparent is that they cannot combine more than two things. We decided there is a whole different system here,” compared to human language.</p> <p>Similarly, Campbell’s monkey, also of West Africa, deploys calls that might be interpreted as evidence of human-style combination of language items, but which Miyagawa and Clarke believe are actually a simpler system. The monkeys make sounds rendered as “hok,” for an eagle alarm, and “krak,” for a leopard alarm. 
To each, they add an “-oo” suffix to turn those utterances into generalized aerial alarms and land alarms.</p> <p>However, that does not mean the Campbell’s monkey has developed a suffix as a kind of linguistic building block that could be part of a more open-ended, larger system of speech, the researchers conclude. Instead, its use is restricted to a small set of fixed utterances, none of which have more than two basic items in them.</p> <p>“It’s not the human system,” Miyagawa says. In the paper, Miyagawa and Clarke contend that the monkeys’ ability to combine these terms means they are merely deploying a “dual-compartment frame” which lacks the capacity for greater complexity.</p> <p>Miyagawa also notes that when the Old World monkeys speak, they seem to use a part of the brain known as the frontal operculum. Human language is heavily associated with Broca’s area, a part of the brain that seems to support more complex operations.</p> <p>If the interpretation of Old World monkey language that Miyagawa and Clarke put forward here holds up, then humans’ ability to harness Broca’s area for language may specifically have enabled them to recombine language elements as other primates cannot — by enabling us to link more than two items together in speech.&nbsp;</p> <p>“It seems like a huge leap,” Miyagawa says. “But it may have been a tiny [physiological] change that turned into this huge leap.”</p> <p>As Miyagawa acknowledges, the new findings are interpretative, and the evolutionary history of human language acquisition is necessarily uncertain in many regards. 
His own operating conception of how humans combine language elements follows strongly from Noam Chomsky’s idea that we use a system called “Merge,” which contains principles that not all linguists accept.</p> <p>Still, Miyagawa suggests, further analysis of the differences between human language and the language of other primates can help us better grasp how our unique language skills evolved, perhaps 100,000 years ago.</p> <p>“There’s been all this effort to teach monkeys human language that didn’t succeed,” Miyagawa notes. “But that doesn’t mean we can’t learn from them.”</p> A new study by an MIT linguist shows that the speech calls of some monkeys may be more sophisticated than realized, but are still far removed from the complexity of human language.Image: WikipediaSchool of Humanities Arts and Social Sciences, Linguistics, Evolution, Biology, Animals, Research Bridging the gap between research and the classroom MIT’s first-ever Science of Reading event brings together researchers and educators to discuss how to use research to improve literacy outcomes. 
Thu, 27 Jun 2019 13:50:01 -0400 Stefanie Koperniak | MIT Open Learning <p>In a moment more reminiscent of a Comic-Con event than a typical MIT symposium, Shawn Robinson, senior research associate at the University of Wisconsin at Madison, helped kick off the first-ever <a href="">MIT Science of Reading event</a> dressed in full superhero attire as Doctor Dyslexia Dude — the star of a graphic novel series he co-created to engage and encourage young readers, rooted in his own experiences as a student with dyslexia.&nbsp;</p> <p>The event, co-sponsored by the MIT Integrated Learning Initiative (MITili) and the McGovern Institute for Brain Research at MIT, took place earlier this month and brought together researchers, educators, administrators, parents, and students to explore how scientific research can better inform educational practices and policies — equipping teachers with scientifically-based strategies that may lead to better outcomes for students.</p> <p>Professor John Gabrieli, MITili director, explained the great need to focus the collective efforts of educators and researchers on literacy.</p> <p>“Reading is critical to all learning and all areas of knowledge. It is the first great educational experience for all children, and can shape a child’s first sense of self,” he said. “If reading is a challenge or a burden, it affects children’s social and emotional core.”</p> <p><strong>A great divide</strong></p> <p>Reading is also a particularly important area to address because so many American students struggle with this fundamental skill. 
More than six out of every 10 fourth graders in the United States are not proficient readers, and reading scores for fourth and eighth graders have increased only slightly since 1992, according to the <a href="">National Assessment of Educational Progress</a>.</p> <p>Gabrieli explained that, just as with biomedical research, where there can be a “valley of death” between basic research and clinical application, the same seems to apply to education. Although there is substantial current research aiming to better understand why students might have difficulty reading in the ways they are currently taught, the research often does not necessarily shape the practices of teachers — or how the teachers themselves are trained to teach.&nbsp;</p> <p>This divide between the research and practical applications in the classroom might stem from a variety of factors. One issue might be the inaccessibility of research publications, which are often not available for free to all — as well as the general need for scientific findings to be communicated in a clear, accessible, engaging way that can lead to actual implementation. Another challenge is the stark difference in pacing between scientific research and classroom teaching. While research can take years to complete and publish, teachers have classrooms full of students — all with different strengths and challenges — who urgently need to learn in real time.</p> <p>Natalie Wexler, author of "The Knowledge Gap," described some of the obstacles to getting the findings of cognitive science integrated into the classroom as matters of “head, heart, and habit.” Teacher education programs tend to focus more on some of the outdated psychological models, like Piaget’s theory of cognitive development, and less on recent cognitive science research. Teachers also have to face the emotional realities of working with their students, and might be concerned that a new approach would cause students to feel bored or frustrated. 
In terms of habit, some new, evidence-based approaches may be, in a practical sense, difficult for teachers to incorporate into the classroom.</p> <p>“Teaching is an incredibly complex activity,” noted Wexler.</p> <p><strong>From labs to classrooms</strong></p> <p>Throughout the day, speakers and panelists highlighted some key insights gained from literacy research, along with some of the implications these might have on education.</p> <p>Mark Seidenberg, professor of psychology at the University of Wisconsin at Madison and author of "Language at the Speed of Sight," discussed studies indicating the strong connection between spoken and printed language.&nbsp;</p> <p>“Reading depends on speech,” said Seidenberg. “Writing systems are codes for expressing spoken language … Spoken language deficits have an enormous impact on children’s reading.”</p> <p>The integration of speech and reading in the brain increases with reading skill. For skilled readers, the patterns of brain activity (measured using functional magnetic resonance imaging) while comprehending spoken and written language are very similar. Becoming literate affects the neural representation of speech, and knowledge of speech affects the representation of print — thus the two become deeply intertwined.&nbsp;</p> <p>In addition, researchers have found that the language of books, even for young children, includes words and expressions that are rarely encountered in speech to children. Therefore, reading aloud to children exposes them to a broader range of linguistic expressions — including more complex ones that are usually only taught much later. Reading to children can thus be especially important, as research indicates that better knowledge of spoken language facilitates learning to read.</p> <p>Although behavior and performance on tests are often used as indicators of how well a student can read, neuroscience data can now provide additional information. 
Neuroimaging of children and young adults identifies brain regions that are critical for integrating speech and print, and can spot differences in the brain activity of a child who might be especially at risk for reading difficulties. Brain imaging can also show how readers’ brains respond to certain reading and comprehension tasks, and how they adapt to different circumstances and challenges.</p> <p>“Brain measures can be more sensitive than behavioral measures in identifying true risk,” said Ola Ozernov-Palchik, a postdoc at the McGovern Institute.&nbsp;</p> <p>Ozernov-Palchik hopes to apply what her team is learning in their current studies to predict reading outcomes for other children, as well as continue to investigate individual differences in dyslexia and dyslexia risk using behavioral and neuroimaging methods.</p> <p>Identifying certain differences early on can be tremendously helpful in providing much-needed early interventions and tailored solutions. Many speakers noted the problem with the current “wait-to-fail” model of noticing that a child has a difficult time reading in second or third grade, and then intervening. Research suggests that earlier intervention could help the child succeed much more than later intervention.</p> <p>Speakers and panelists spoke about current efforts, including <a href="">Reach Every Reader</a> (a collaboration between MITili, the Harvard Graduate School of Education, and the Florida Center for Reading Research), that seek to provide support to students by bringing together education practitioners and scientists.&nbsp;</p> <p>“We have a lot of information, but we have the challenge of how to enact it in the real world,” said Gabrieli, noting that he is optimistic about the potential for the additional conversations and collaborations that might grow out of the discussions of the Science of Reading event. 
“We know a lot of things can be better and will require partnerships, but there is a path forward.”</p> Shawn Robinson, senior research associate at the University of Wisconsin at Madison, helped kick off the first-ever MIT Science of Reading event dressed in full superhero attire as Doctor Dyslexia Dude — the star of a graphic novel series he co-created to engage and encourage young readers, rooted in his own experiences as a student with dyslexia. Photo: Christopher McIntoshOffice of Open Learning, McGovern Institute, Brain and cognitive sciences, School of Science, K-12 education, Education, teaching, academics, Learning, Language QS ranks MIT the world’s No. 1 university for 2019-20 Ranked at the top for the eighth straight year, the Institute also places first in 11 of 48 disciplines. Tue, 18 Jun 2019 20:01:00 -0400 MIT News Office <p>MIT has again been named the world’s top university by the QS World University Rankings, which were announced today. This is the eighth year in a row MIT has received this distinction.</p> <p>The full 2019-20 rankings — published by Quacquarelli Symonds, an organization specializing in education and study abroad — can be found at <a href=""></a>. The QS rankings were based on academic reputation, employer reputation, citations per faculty, student-to-faculty ratio, proportion of international faculty, and proportion of international students. MIT earned a perfect overall score of 100.</p> <p>MIT was also ranked the world’s top university in <a href="">11 of 48 disciplines ranked by QS</a>, as announced in February of this year.</p> <p>MIT received a No. 
1 ranking in the following QS subject areas: Chemistry; Computer Science and Information Systems; Chemical Engineering; Civil and Structural Engineering; Electrical and Electronic Engineering; Mechanical, Aeronautical and Manufacturing Engineering; Linguistics; Materials Science; Mathematics; Physics and Astronomy; and Statistics and Operational Research.</p> <p>MIT also placed second in six subject areas: Accounting and Finance; Architecture/Built Environment; Biological Sciences; Earth and Marine Sciences; Economics and Econometrics; and Environmental Sciences.</p> Image: Christopher HartingRankings, Architecture, Chemical engineering, Chemistry, Civil and environmental engineering, Electrical Engineering & Computer Science (eecs), Economics, Linguistics, Materials Science and Engineering, DMSE, Mechanical engineering, Aeronautical and astronautical engineering, Physics, Business and management, Accounting, Finance, Arts, Design, Mathematics, EAPS, School of Architecture and Planning, School of Humanities Arts and Social Sciences, School of Science, School of Engineering, Sloan School of Management Teaching language models grammar really does make them smarter Researchers submit deep learning models to a set of psychology tests to see which ones grasp key linguistic rules. Wed, 29 May 2019 14:00:01 -0400 Kim Martineau | MIT Quest for Intelligence <p>Voice assistants like Siri and Alexa can tell the weather and crack a good joke, but any 8-year-old can carry on a better conversation.</p> <p>The deep learning models that power Siri and Alexa learn to understand our commands by picking out patterns in sequences of words and phrases. 
Their narrow, statistical understanding of language stands in sharp contrast to our own creative, spontaneous ways of speaking, a skill that starts&nbsp;developing even before we are born,&nbsp;while we're&nbsp;still in the womb.&nbsp;</p> <p>To give computers some of our innate feel for language, researchers have started training deep learning models on the grammatical rules that most of us grasp intuitively, even if we never learned how to diagram a sentence in school. Grammatical constraints seem to help the models learn faster and perform better, but because neural networks reveal very little about their decision-making process, researchers have struggled to confirm that the gains are due to the grammar, and not the models’ expert ability to find patterns in sequences of words.&nbsp;</p> <p>Now psycholinguists have stepped in to help. To peer inside the models, researchers have taken psycholinguistic tests originally developed to study human language understanding and adapted them to probe what neural networks know about language. In a pair of papers to be presented in June at the&nbsp;<a href="">North American Chapter of the Association for Computational Linguistics</a>&nbsp;conference, researchers from MIT, Harvard University, University of California, IBM Research, and Kyoto University have devised a set of tests to tease out the models’ knowledge of specific grammatical rules. They find evidence that grammar-enriched deep learning models comprehend some fairly sophisticated rules, performing better than models trained on little-to-no grammar, and using a fraction of the data.</p> <p>“Grammar helps the model behave&nbsp;in more human-like ways,” says&nbsp;<a href="">Miguel Ballesteros</a>, an IBM researcher with the&nbsp;<a href="">MIT-IBM Watson AI Lab</a>, and co-author of both studies. “The sequential models don’t seem to care if you finish a sentence with a non-grammatical phrase. Why? 
Because they don’t see that hierarchy.”</p> <p>As a postdoc at Carnegie Mellon University, Ballesteros helped develop a method for training modern language models on sentence structure called&nbsp;<a href="">recurrent neural network grammars</a>, or RNNGs. In the current research, he and his colleagues exposed the RNNG model, and similar models with little-to-no grammar training, to sentences with good, bad, or ambiguous syntax. When human subjects are asked to read sentences that sound grammatically off, their surprise is registered by longer response times. For computers, surprise is expressed in probabilities; when low-probability words appear in the place of high-probability words, researchers give the models a higher surprisal score.</p> <p>They found that the best-performing model —&nbsp;the grammar-enriched RNNG model&nbsp;— showed greater surprisal when exposed to grammatical anomalies; for example, when the word “that” improperly appears instead of “what” to introduce an embedded clause; “I know what the lion devoured at sunrise” is a perfectly natural sentence, but “I know that the lion devoured at sunrise” sounds like it has something missing — because it does.</p> <p>Linguists call this type of construction a dependency between a filler (a word like&nbsp;who or&nbsp;what) and a gap (the absence of a phrase where one is typically required). 
Even when more complicated constructions of this type are shown to grammar-enriched models, they — like native speakers of English — clearly know which ones are wrong.&nbsp;</p> <p>For example, “The policeman who the criminal shot the politician with his gun shocked during the trial”<em> </em>is anomalous; the gap corresponding to the filler “who” should come after the&nbsp;verb,&nbsp;“shot,”<em>&nbsp;</em>not&nbsp;“shocked.”<em>&nbsp;</em>Rewriting the sentence to change the position of the gap, as in “The policeman who the criminal shot with his gun shocked the jury during the trial,”<em>&nbsp;</em>is longwinded, but perfectly grammatical.</p> <p>“Without being trained on tens of millions of words, state-of-the-art sequential models don’t care where the gaps are and aren’t in sentences like those,” says&nbsp;<a href="">Roger Levy</a>, a professor in MIT’s&nbsp;<a href="">Department of Brain and Cognitive Sciences</a>, and co-author of both studies. “A human would find that really weird, and, apparently, so do grammar-enriched models.”</p> <p>Bad grammar, of course, not only sounds weird; it can turn an entire sentence into gibberish, underscoring the importance of syntax both in cognition and to the psycholinguists who study it to learn more about the brain’s capacity for symbolic thought.</p> <p>“Getting the structure right is important to understanding the meaning of the sentence and how to interpret it,” says&nbsp;<a href="">Peng Qian</a>, a graduate student at MIT and co-author of both studies.&nbsp;</p> <p>The researchers plan to next run their experiments on larger datasets and find out if grammar-enriched models learn new words and phrases faster. 
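The surprisal measure described above can be sketched in a few lines. This is purely an illustrative toy, not the models from the papers: the bigram table and its probability values are invented for demonstration, whereas the actual studies scored words with trained neural language models such as RNNGs.

```python
import math

# Hypothetical toy bigram probabilities, invented for illustration only;
# the studies discussed here used trained neural language models instead.
bigram_prob = {
    ("know", "what"): 0.20,    # an expected continuation
    ("know", "that"): 0.15,
    ("devoured", "the"): 0.30,
    ("devoured", "at"): 0.01,  # unexpected where a gap's filler belongs
}

def surprisal(prev_word, word, probs, floor=1e-6):
    """Surprisal in bits: -log2 P(word | previous word).

    Low-probability words yield high surprisal, mirroring the longer
    reading times human subjects show at surprising words. Unseen
    bigrams fall back to a small floor probability.
    """
    p = probs.get((prev_word, word), floor)
    return -math.log2(p)

print(surprisal("devoured", "the", bigram_prob))  # ~1.74 bits
print(surprisal("devoured", "at", bigram_prob))   # ~6.64 bits: higher surprise
```

A grammar-enriched model would, in effect, assign much lower probability (and hence higher surprisal) to words appearing where its learned syntactic structure forbids them.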
Just as submitting neural networks to psychology tests is helping AI engineers understand and improve language models, psychologists hope to use this information to build better models of the brain.&nbsp;</p> <p>“Some component of our genetic endowment gives us this rich ability to speak,” says&nbsp;<a href="">Ethan Wilcox</a>, a graduate student at Harvard and co-author of both studies. “These are the sorts of methods that can produce insights into how we learn and understand language when our closest kin cannot.”</p> In a pair of studies, researchers show that grammar-enriched deep learning models understand some key rules about language use. Peng Qian (left) and Ethan Wilcox, graduate students at MIT and Harvard University respectively, presented the work at a recent MIT-IBM Watson AI Lab poster session. Photo: Kim MartineauMIT Quest for Intelligence, MIT-IBM Watson AI Lab, Brain and cognitive sciences, School of Science, Artificial intelligence, Algorithms, Computer modeling, Computer science and technology, Linguistics, Machine learning Merging machine learning and the life sciences Through computing, senior and Marshall Scholar Anna Sappington seeks answers to biological questions. Sat, 18 May 2019 23:59:59 -0400 Gina Vitale | MIT News correspondent <p>Anna Sappington’s first moments of fame came when she was a young girl, living in a home so full of pets she calls it a zoo. She grew up on the Chesapeake Bay, surrounded by a lush environment teeming with wildlife, and her father was an environmental scientist. One day, when she found a frog in a skip laurel bush, she named him Skippy and built him a habitat. Later on, she and Skippy appeared on the Animal Planet TV special “What’s to Love About Weird Pets?”</p> <p>Now a senior majoring in computer science and molecular biology, Sappington has been chosen for another prestigious honor: She’s one of five MIT students selected this year to be Marshall Scholars. 
She chose to study computer science because she wanted to have a role in pulling apart and understanding data, and she chose biology because of her lifelong fascination with nature, cells, genetic inheritance — and, of course, Skippy.</p> <p>“My interests have grown and expanded in different ways, but they’re still kind of rooted in this natural dual passion that I have for both of these fields,” she says.</p> <p><strong>An eye for genomic research</strong></p> <p>When Sappington came to MIT, it was right after her first summer internship at the National Institutes of Health, where she examined genes that could be related to increased risk of cardiovascular disease. It was her first experience working with data on human patients, and it inspired her to continue working in medical research.</p> <p>When she was a first-year student, Sappington spent the year at the Koch Institute, working with a graduate student to determine how liver cells respond to infection by hepatitis B virus. The summer after that, she went back to the NIH to contribute to a different project. This one still involved human health data, but it was more focused on building a computational tool. Sappington helped develop an algorithm that would quickly calculate how similar two genomes or proteins were to each other, a technology that could be used to screen for different bacteria strains in real-time.</p> <p>“I wanted to kind of get my feet wet in all the different kinds of ways computer science and biology and human health can interact,” she says.</p> <p>Since her return from the NIH at the beginning of her sophomore year, Sappington has been working in Aviv Regev’s lab in the Broad Institute of MIT and Harvard. 
She says Regev, a professor in the MIT Department of Biology, has been nothing short of an inspiration.</p> <p>“She herself is just an incredible role model for the world of computational biology,” Sappington says.</p> <p>The main initiative of Regev’s lab is the Human Cell Atlas, which was recently named <em>Science’s</em> Breakthrough of the Year. It’s like a layer on top of the Human Genome Project, she says. They are working to identify and catalogue the different types of cells, such as skin cells and lung cells. The cataloging is necessary because, even though these cells have the exact same DNA genome, they have different specialized functions and therefore can’t be identified by genome alone.</p> <p>“Within a given tissue, like your skin tissue, cells are actually like a whole collage of different molecular profiles in how they express their genes,” she says. “So while the underlying genome is the same, there’s all sorts of other factors that make your cells express those genes — which turn into proteins — differently.”</p> <p>Because the human body contains so many different types of cells, teams of researchers work on different pieces. Sappington works on data analysis as part of a team that is classifying retinal cells. It’s a unique challenge, she says, because the retina has more than 40 different types of cells, all of which respond to disease in different ways. While still chipping away at human retinal cell types, her team contributed to a recently published retinal cell atlas for the macaque monkey. For her undergraduate research career, Sappington was named a 2018-2019 Goldwater Scholar.</p> <p><strong>Dancing, speaking, leading</strong></p> <p>Before coming to MIT, Sappington had never been involved in dancing. But after she saw a showcase by the <a href="">Asian Dance Team</a> her first year, she decided to give it a try.
After a few semesters dancing with ADT, Sappington also joined MIT DanceTroupe, where she found the culture to be creative, supportive, and incredibly fun.</p> <p>“[I] just really fell in love with the community, and the general community of dancers at MIT,” she says.</p> <p>Dance wasn’t the only aspect of the arts and humanities at MIT that she loved. She is also a part of the Burchard Scholars program, which allows students with a particular interest in the humanities to explore that topic. After she took a linguistics class with Professor David Pesetsky her first year, that field became her official humanities concentration. She ended up taking the next level of that class, which centered around syntax, and then she and five other students later created their own special subject class on linguistics.</p> <p>“Essentially linguistics is the study of how language as a whole works, and the underlying rules that govern it,” she says. “It interfaces with brain and cognitive science, and even computer science, and how language is learned and acquired.”</p> <p>Outside of class, Sappington has also been involved with <a href="">TechX</a>, a student-run organization that is responsible for many of MIT’s tech-related events, including HackMIT. Events also include the makeathon MakeMIT, the spring career fair and technology demo xFair, and high school mentoring program THINK. After serving on and running an event committee, Sappington served as the overall director for TechX in her junior year. While she’s no longer in charge, she’s still grateful to be part of the team.</p> <p>“The whole thing was like one big family. … Each committee has its own intercommittee pride with the event that they run, but then everyone also has to rely on each other,” she says.</p> <p><strong>Machine learning across the pond</strong></p> <p>After graduation, Sappington will be heading off to University College London to earn her MS in machine learning. 
Her goal is to explore machine learning in a context that isn’t biology, so that she can learn new and different approaches that she might later be able to apply to biological challenges. The second year of her Marshall Scholarship will be spent at Cambridge University, where she will do a full year of research, likely involving machine learning applied to health care or other biological questions.</p> <p>Her ultimate goal is to find new and better ways to use machine learning and technology to improve the health care system. To that end, she aims to get her MD/PhD after the next two years in England. After volunteering at the Massachusetts General Hospital and shadowing doctors in the Boston area, Sappington is pretty certain she wants a career where she can interact with patients while still being involved with computer science and biology. She’s excited to move forward with the next chapter of her life — but when it comes to leaving MIT, she’s got understandably mixed feelings.</p> <p>“I think no matter where I would be going after graduation, it’s bittersweet to leave the incredible community that is the MIT community,” she says.</p> Anna Sappington. Image: Ian MacLellan Can science writing be automated? A neural network can read scientific papers and render a plain-English summary. Wed, 17 Apr 2019 23:59:59 -0400 David L.
Chandler | MIT News Office <p>The work of a science writer, this one included, involves reading journal papers filled with specialized technical terminology and figuring out how to explain their contents in language that readers without a scientific background can understand.</p> <p>Now, a team of scientists at MIT and elsewhere has developed a neural network, a form of artificial intelligence (AI), that can do much the same thing, at least to a limited extent: It can read scientific papers and render a plain-English summary in a sentence or two.</p> <p>Even in this limited form, such a neural network could be useful for helping editors, writers, and scientists scan a large number of papers to get a preliminary sense of what they’re about. But the approach the team developed could also find applications in a variety of other areas besides language processing, including machine translation and speech recognition.</p> <p>The <a href="" target="_blank">work is described</a> in the journal <em>Transactions of the Association for Computational Linguistics</em>, in a paper by Rumen Dangovski and Li Jing, both MIT graduate students; Marin Soljačić, a professor of physics at MIT; Preslav Nakov, a principal scientist at the Qatar Computing Research Institute, HBKU; and Mićo Tatalović, a former Knight Science Journalism fellow at MIT and a former editor at <em>New Scientist</em> magazine.</p> <p><strong>From AI for physics to natural language</strong></p> <p>The work came about as a result of an unrelated project, which involved developing new artificial intelligence approaches based on neural networks, aimed at tackling certain thorny problems in physics. However, the researchers soon realized that the same approach could be used to address other difficult computational problems, including natural language processing, in ways that might outperform existing neural network systems.</p> <p>“We have been doing various kinds of work in AI for a few years now,” Soljačić says.
“We use AI to help with our research, basically to do physics better. And as we got to be&nbsp; more familiar with AI, we would notice that every once in a while there is an opportunity to add to the field of AI because of something that we know from physics — a certain mathematical construct or a certain law in physics. We noticed that hey, if we use that, it could actually help with this or that particular AI algorithm.”</p> <p>This approach could be useful in a variety of specific kinds of tasks, he says, but not all. “We can’t say this is useful for all of AI, but there are instances where we can use an insight from physics to improve on a given AI algorithm.”</p> <p>Neural networks in general are an attempt to mimic the way humans learn certain new things: The computer examines many different examples and “learns” what the key underlying patterns are. Such systems are widely used for pattern recognition, such as learning to identify objects depicted in photos.</p> <p>But neural networks in general have difficulty correlating information from a long string of data, such as is required in interpreting a research paper. Various tricks have been used to improve this capability, including techniques known as long short-term memory (LSTM) and gated recurrent units (GRU), but these still fall well short of what’s needed for real natural-language processing, the researchers say.</p> <p>The team came up with an alternative system, which instead of being based on the multiplication of matrices, as most conventional neural networks are, is based on vectors rotating in a multidimensional space. The key concept is something they call a rotational unit of memory (RUM).</p> <p>Essentially, the system represents each word in the text by a vector in multidimensional space — a line of a certain length pointing in a particular direction. Each subsequent word swings this vector in some direction, represented in a theoretical space that can ultimately have thousands of dimensions. 
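That rotation-based update can be pictured with a toy numpy sketch (a simplified illustration of the idea, not the published RUM equations): each word turns the state vector by some angle, and because rotations preserve a vector's length, the stored "memory" neither explodes nor vanishes the way repeated matrix multiplications can make it do.

```python
import numpy as np

def rotate(state, angle, i=0, j=1):
    """Rotate `state` by `angle` radians in the plane of axes i and j
    (a Givens rotation) -- a stand-in for RUM's rotational update."""
    R = np.eye(len(state))
    R[i, i] = R[j, j] = np.cos(angle)
    R[i, j], R[j, i] = -np.sin(angle), np.sin(angle)
    return R @ state

state = np.array([1.0, 0.0, 0.0])   # initial memory vector
for angle in [0.3, -0.1, 0.7]:      # one "swing" per input word
    state = rotate(state, angle)

# Rotations are norm-preserving: the memory's length is still 1.
print(round(float(np.linalg.norm(state)), 6))  # 1.0
```

In the real model the angles are learned from the input rather than fixed, and the vectors live in hundreds or thousands of dimensions, but the norm-preserving property is the same.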
At the end of the process, the final vector or set of vectors is translated back into its corresponding string of words.</p> <p>“RUM helps neural networks to do two things very well,” Nakov says. “It helps them to remember better, and it enables them to recall information more accurately.”</p> <p>After developing the RUM system to help with certain tough physics problems such as the behavior of light in complex engineered materials, “we realized one of the places where we thought this approach could be useful would be natural language processing,” says Soljačić,&nbsp; recalling a conversation with Tatalović, who noted that such a tool would be useful for his work as an editor trying to decide which papers to write about. Tatalović was at the time exploring AI in science journalism as his <a href="">Knight fellowship project</a>.</p> <p>“And so we tried a few natural language processing tasks on it,” Soljačić says. “One that we tried was summarizing articles, and that seems to be working quite well.”</p> <p><strong>The proof is in the reading</strong></p> <p>As an example, they fed the same research paper through a conventional LSTM-based neural network and through their RUM-based system. The resulting summaries were dramatically different.</p> <p>The LSTM system yielded this highly repetitive and fairly technical summary: <em>“Baylisascariasis,” kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe consequences. This infection, termed “baylisascariasis,” kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe consequences. This infection, termed “baylisascariasis,” kills mice, has endangered the allegheny woodrat.</em></p> <p>Based on the same paper, the RUM system produced a much more readable summary, and one that did not include the needless repetition of phrases: <em>Urban raccoons may infect people more than previously assumed. 
7 percent of surveyed individuals tested positive for raccoon roundworm antibodies. Over 90 percent of raccoons in Santa Barbara play host to this parasite.</em></p> <p>Already, the RUM-based system has been expanded so it can “read” through entire research papers, not just the abstracts, to produce a summary of their contents. The researchers have even tried using the system on their own research paper describing these findings — the paper that this news story is attempting to summarize.</p> <p>Here is the new neural network’s summary: <em>Researchers have developed a new representation process on the rotational unit of RUM, a recurrent memory that can be used to solve a broad spectrum of the neural revolution in natural language processing.</em></p> <p>It may not be elegant prose, but it does at least hit the key points of information.</p> <p>Çağlar Gülçehre, a research scientist at the British AI company Deepmind Technologies, who was not involved in this work, says this research tackles an important problem in neural networks, having to do with relating pieces of information that are widely separated in time or space. “This problem has been a very fundamental issue in AI due to the necessity to do reasoning over long time-delays in sequence-prediction tasks,” he says. “Although I do not think this paper completely solves this problem, it shows promising results on the long-term dependency tasks such as question-answering, text summarization, and associative recall.”</p> <p>Gülçehre adds, “Since the experiments conducted and model proposed in this paper are released as open-source on Github, as a result many researchers will be interested in trying it on their own tasks. 
… To be more specific, potentially the approach proposed in this paper can have very high impact on the fields of natural language processing and reinforcement learning, where the long-term dependencies are very crucial.”</p> <p>The research received support from the Army Research Office, the National Science Foundation, the MIT-SenseTime Alliance on Artificial Intelligence, and the Semiconductor Research Corporation. The team also had help from the Science Daily website, whose articles were used in training some of the AI models in this research.</p> Image: Chelsea Turner 3 Questions: What is linguistics? MIT Professor David Pesetsky describes the science of language and how it sheds light on deep properties of the human mind. Wed, 27 Mar 2019 11:55:01 -0400 School of Humanities, Arts, and Social Sciences <p><em>For decades, MIT has been widely held to have one of the best linguistics programs in the world. But what is linguistics and what does it teach us about human language? To learn more about the ways linguists help make a better world, SHASS Communications recently spoke with David Pesetsky, the Ferrari P. Ward Professor of Modern Languages and Linguistics at MIT. A Margaret MacVicar Faculty Fellow (MIT's highest undergraduate teaching award), Pesetsky focuses his research on syntax and the implications of syntactic theory to language acquisition, semantics, phonology, and morphology (word-structure).
He is a fellow of the American Association for the Advancement of Science and a fellow of the Linguistic Society of America. </em></p> <p><em>In collaboration with Pesetsky, SHASS Communications also developed a companion piece to his interview, titled "<a href="" target="_blank">The Building Blocks of Linguistics</a>." This brisk overview of basic information about the field includes entries such as: "Make Your Own Personal Dialect Map," "Know Your Linguistics Subfields," and "Top 10 Ways Linguists Help Make a Better World."</em></p> <p><strong>Q</strong>: Linguistics, the science of language, is often a challenging discipline for those outside the field to understand. Can you comment on why that might be?<br /> <br /> <strong>A</strong>: Linguistics is the field that tries to figure out how human language works — for example: how the languages of the world differ, how they are the same, and why; how children acquire language; how languages change over time and why; how we produce and understand language in real time; and how language is processed by the brain.<br /> <br /> These are all very challenging questions, and the linguistic ideas and hypotheses about them are sometimes intricate and highly structured. Still, I doubt that linguistics is intrinsically more daunting than other fields explored at MIT — though it is certainly just as exciting.<br /> <br /> The problems that linguists face in communicating about our discipline mostly arise, I think, from the absence of any foundational teaching about linguistics in our elementary and middle schools. This means that the most basic facts about language — including the building blocks of language and how they combine — remain unknown, even to most well-educated people.<br /> <br /> While it's a challenge for scholars in other major fields to explain cutting-edge discoveries to others, they don’t typically have to start by explaining first principles. 
A biologist or astronomer speaking to educated adults, for example, can assume they know that the heart pumps blood and that the Earth goes around the sun.&nbsp;<br /> <br /> Linguistics has equivalent facts to those examples, among them: how speech sounds are produced by the vocal tract, and the hierarchical organization of words in a sentence. Our research builds on these fundamentals when phonologists study the complex ways in which languages organize their speech sounds, for example; or when semanticists and syntacticians (like me) study how the structure of a sentence constrains its meaning.<br /> <br /> Unlike our physicist or biologist colleagues, however, we really have to start from scratch each time we discuss our work. That is a challenge that we will continue to face for a while yet, I fear. But there is one silver lining: watching the eyes of our students and colleagues grow wide with excitement when they do learn what's been going on in their own use of language — in their own linguistic heads — all these years. This reliable phenomenon makes <a href="" target="_blank">24.900</a>, MIT’s very popular introductory linguistics undergraduate class, one of my favorite classes to teach. (24.900 is <a href="" target="_blank">also available</a> via MIT OpenCourseWare.)<br /> <br /> <strong>Q:</strong> Can you describe the kinds of questions linguistic scholars explore and why they are important?<br /> <br /> <strong>A:</strong> Linguists study the puzzles of human language from just about every possible angle&nbsp;— its form, its meanings, sound, gesture, change over time, acquisition by children, processing by the brain, role in social interaction, and much more. Here at MIT Linguistics, our research tends to focus on the structural aspects of language, the logic by which its inner workings are organized.<br /> <br /> Our methodologies are diverse. 
Many of us work closely with speakers of other languages not only to learn about the languages themselves, but also to test hypotheses about language in general. There are also active programs of laboratory research in our department, on language acquisition in children, the online processing of semantics and syntax, phonetics, and more.<br /> <br /> My own current work focuses on a fact about language that looks like the most minor of details — until you learn that more or less the very same fact shows up in language after language, all around the globe!<br /> <br /> The fact is the strange, obligatory shrinkage in the size of a clause when its subject is extracted to another position in the sentence. In English, for example, the subordinating conjunction “that” — which is normally used to introduce a sentence embedded in a larger sentence (linguists call it a “complementizer”) — is omitted when the subject is questioned.</p> <p>For example, we say “Who are you sure will smile?” not “Who are you sure that will smile?”</p> <p>Something very similar happens in languages all over the globe. We find it in Bùlì, for example, a language of Ghana; and in dialects of Arabic; and in the Mayan language Kaqchikel. Adding to the significance of this finding: MIT alumnus <a href="" target="_blank">Colin Phillips</a> PhD '96 has shown that, in English at least, this language protocol is acquired by children without any statistically usable evidence for it from the speech they hear around them.<br /> <br /> A phenomenon like this one, found all over the globe and clearly not directly learned from experience, cannot be an accident — but must be a by-product of some deeper general property of the human language faculty, and of the human mind. I am now developing and testing a hypothesis about what this deeper property might be.<br /> <br /> This example also points to one reason linguistics research is exciting.
Language is the defining property of our species and to understand how language works is to better understand ourselves. Linguistic research sheds light on many dimensions of the human experience.<br /> <br /> And yet, for all the great advances that my field has made, there are so many fundamental aspects of the human language capacity that we do not properly understand yet. I do not believe that genuine progress can be made on a whole host of language-related problems until we broaden and deepen our understanding of how language works — whether the problem is teaching computers to understand us, teaching children to read, or figuring out the most effective way to learn a second language.<br /> <br /> <strong>Q:</strong> What is the historical relationship between research in linguistics and artificial intelligence (AI), and what roles might linguistics scholarship play in the next era of AI research?<br /> <br /> <strong>A</strong>: The relation between linguistic research and language-related research on AI has been less close than one might expect. One reason might be the different goals of the scholars involved. Historically, the questions about language viewed as most urgent by linguists and AI researchers have not been the same. Consequently, language-related AI has tended to favor end-runs around the findings of linguistics concerning how human language works.<br /> <br /> In recent years, however, the tide has been turning, and one sees more and more interaction and collaboration between the two domains of research, including here at MIT. 
Under the aegis of the MIT Quest for Intelligence, for example, I've been meeting regularly with a colleague from Electrical Engineering and Computer Science and a colleague from Brain and Cognitive Sciences to explore ways in which research on syntax can inform machine learning for languages that lack extensive bodies of textual material — a precondition for training existing kinds of systems.<br /> <br /> A child acquiring language does this without the aid of the thousands of annotated sentences that machine systems require. An intriguing question, then, is: Can we build machines with some of the capabilities of human children, which might not need such aids?</p> <p>I am looking forward to seeing what progress we can make together.</p> <p><span style="font-size:12px;"><em>Story prepared by MIT SHASS Communications</em></span></p> David Pesetsky is the Ferrari P. Ward Professor of Modern Languages and Linguistics at MIT. Photo: Allegra Boverman QS World University Rankings rates MIT No. 1 in 11 subjects for 2019 Institute ranks within the top 2 in 17 of 48 subject areas. Tue, 26 Feb 2019 19:00:00 -0500 MIT News Office <p>MIT has been honored with 11 No. 1 subject rankings in the QS World University Rankings for 2019.</p> <p>The Institute received a No.
1 ranking in the following QS subject areas: Chemistry; Computer Science and Information Systems; Chemical Engineering; Civil and Structural Engineering; Electrical and Electronic Engineering; Mechanical, Aeronautical and Manufacturing Engineering; Linguistics; Materials Science; Mathematics; Physics and Astronomy; and Statistics and Operational Research.</p> <p>MIT also placed second in six subject areas: Accounting and Finance; Architecture/Built Environment; Biological Sciences; Earth and Marine Sciences; Economics and Econometrics; and Environmental Sciences.</p> <p>Quacquarelli Symonds Limited subject rankings, published annually, are designed to help prospective students find the leading schools in their field of interest. Rankings are based on research quality and accomplishments, academic reputation, and graduate employment.</p> <p>MIT has been ranked as the No. 1 university in the world by QS World University Rankings for seven straight years.</p> Peering under the hood of fake-news detectors Study uncovers language patterns that AI models link to factual and false articles; underscores need for further testing. Wed, 06 Feb 2019 00:00:00 -0500 Rob Matheson | MIT News Office <p>New work from MIT researchers peers under the hood of an automated fake-news detection system, revealing how machine-learning models catch subtle but consistent differences in the language of factual and false stories.
The research also underscores how fake-news detectors should undergo more rigorous testing to be effective for real-world applications.</p> <p>Popularized as a concept in the United States during the 2016 presidential election, fake news is a form of propaganda created to mislead readers, in order to generate views on websites or steer public opinion.</p> <p>Almost as quickly as the issue became mainstream, researchers began developing automated fake-news detectors — neural networks that “learn” from large amounts of data to recognize linguistic cues indicative of false articles. Given new articles to assess, these networks can separate fact from fiction with fairly high accuracy, at least in controlled settings.</p> <p>One issue, however, is the “black box” problem — meaning there’s no telling what linguistic patterns the networks analyze during training. They’re also trained and tested on the same topics, which may limit their potential to generalize to new topics, a necessity for analyzing news across the internet.</p> <p>In a paper presented at the Conference and Workshop on Neural Information Processing Systems, the researchers tackle both of those issues. They developed a deep-learning model that learns to detect language patterns of fake and real news. Part of their work “cracks open” the black box to find the words and phrases the model captures to make its predictions.</p> <p>Additionally, they tested their model on a novel topic it didn’t see in training. This approach classifies individual articles based solely on language patterns, which more closely represents a real-world application for news readers.
Traditional fake news detectors classify articles based on text combined with source information, such as a Wikipedia page or website.</p> <p>“In our case, we wanted to understand what was the decision-process of the classifier based only on language, as this can provide insights on what is the language of fake news,” says co-author Xavier Boix, a postdoc in the lab of Tomaso Poggio, the Eugene McDermott Professor in the Department of Brain and Cognitive Sciences (BCS) and director of the Center for Brains, Minds, and Machines (CBMM), a National Science Foundation-funded center housed within the McGovern Institute for Brain Research.</p> <p>“A key issue with machine learning and artificial intelligence is that you get an answer and don’t know why you got that answer,” says graduate student and first author Nicole O’Brien ’17. “Showing these inner workings takes a first step toward understanding the reliability of deep-learning fake-news detectors.”</p> <p>The model identifies sets of words that tend to appear more frequently in either real or fake news — some perhaps obvious, others much less so. The findings, the researchers say, point to subtle yet consistent differences between fake news — which favors exaggerations and superlatives — and real news, which leans more toward conservative word choices.</p> <p>“Fake news is a threat for democracy,” Boix says. “In our lab, our objective isn’t just to push science forward, but also to use technologies to help society. … It would be powerful to have tools for users or companies that could provide an assessment of whether news is fake or not.”</p> <p>The paper’s other co-authors are Sophia Latessa, an undergraduate student in CBMM; and Georgios Evangelopoulos, a researcher in CBMM, the McGovern Institute, and the Laboratory for Computational and Statistical Learning.</p> <p><strong>Limiting bias</strong></p> <p>The researchers’ model is a convolutional neural network that trains on a dataset of fake news and real news.
For training and testing, the researchers used a popular fake news research dataset from Kaggle, which contains around 12,000 fake news sample articles from 244 different websites. They also compiled a dataset of real news samples, using more than 2,000 from the <em>New York Times</em> and more than 9,000 from <em>The Guardian</em>.</p> <p>In training, the model captures the language of an article as “word embeddings,” where words are represented as vectors — basically, arrays of numbers — with words of similar semantic meanings clustered closer together. In doing so, it captures triplets of words as patterns that provide some context — such as, say, a negative comment about a political party. Given a new article, the model scans the text for similar patterns and sends them over a series of layers. A final output layer determines the probability of each pattern: real or fake.</p> <p>The researchers first trained and tested the model in the traditional way, using the same topics. But they thought this might create an inherent bias in the model, since certain topics are more often the subject of fake or real news. For example, fake news stories are generally more likely to include the words “Trump” and “Clinton.”</p> <p>“But that’s not what we wanted,” O’Brien says. “That just shows topics that are strongly weighting in fake and real news. … We wanted to find the actual patterns in language that are indicative of those.”</p> <p>Next, the researchers trained the model on all topics without any mention of the word “Trump,” and tested the model only on samples that had been set aside from the training data and that did contain the word “Trump.” While the traditional approach reached 93-percent accuracy, the second approach reached 87-percent accuracy.
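The held-out-topic protocol described above is simple to sketch. In this illustrative snippet (the function name and the miniature dataset are invented, not the Kaggle data), every article mentioning a chosen keyword is kept out of training and used only for testing:

```python
def topic_holdout_split(articles, keyword):
    """Train only on articles that never mention `keyword`, and test
    only on those that do, so the classifier must rely on language
    patterns rather than topic-specific words."""
    kw = keyword.lower()
    train = [a for a in articles if kw not in a[0].lower()]
    test = [a for a in articles if kw in a[0].lower()]
    return train, test

# Invented miniature dataset of (text, label) pairs.
articles = [
    ("Trump rally draws unbelievable, shocking crowds", "fake"),
    ("Senate committee releases annual budget report", "real"),
    ("You won't believe what Trump did next", "fake"),
    ("City council approves new transit plan", "real"),
]

train, test = topic_holdout_split(articles, "Trump")
print(len(train), len(test))  # 2 2
```

Because no training article contains the keyword, any accuracy on the test set must come from style and phrasing rather than topic vocabulary, which is exactly what the 93-versus-87-percent comparison measures.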
This accuracy gap, the researchers say, highlights the importance of using topics held out from the training process, to ensure the model can generalize what it has learned to new topics.</p> <p><strong>More research needed</strong></p> <p>To open the black box, the researchers then retraced their steps. Each time the model makes a prediction about a word triplet, a certain part of the model activates, depending on whether the triplet is more likely from a real or fake news story. The researchers designed a method to retrace each prediction back to its designated part and then find the exact words that made it activate.</p> <p>More research is needed to determine how useful this information is to readers, Boix says. In the future, the model could potentially be combined with, say, automated fact-checkers and other tools to give readers an edge in combating misinformation. After some refining, the model could also be the basis of a browser extension or app that alerts readers to potential fake news language.</p> <p>“If I just give you an article, and highlight those patterns in the article as you’re reading, you could assess if the article is more or less fake,” he says. “It would be kind of like a warning to say, ‘Hey, maybe there is something strange here.’”</p> <p>“The work touches two very hot research topics: fighting algorithmic bias and explainable AI,” says Preslav Nakov, a senior scientist at the Qatar Computing Research Institute, part of Hamad bin Khalifa University, whose work focuses on fake news. “In particular, the authors make sure that their approach is not fooled by the prevalence of some topics in fake versus real news. They further show that they can trace the algorithm’s decision back to specific words in the input article.”</p> <p>But Nakov also offers a word of caution: it’s difficult to control for many different types of biases in language.
For example, the researchers use real news mostly from <em>The New York Times</em> and <em>The Guardian</em>. The next question, he says, is “how do we make sure that a system trained on this dataset would not learn that real news must necessarily follow the writing style of these two specific news outlets?”</p> Image: MIT NewsResearch, Computer science and technology, Algorithms, Politics, Technology and society, Language, Artificial intelligence, Ethics, Social media, Writing, Computer Science and Artificial Intelligence Laboratory (CSAIL), Electrical Engineering & Computer Science (eecs), School of Engineering Putting neural networks under the microscope Researchers pinpoint the “neurons” in machine-learning systems that capture specific linguistic features during language-processing tasks. Fri, 01 Feb 2019 00:00:00 -0500 Rob Matheson | MIT News Office <p>Researchers from MIT and the Qatar Computing Research Institute (QCRI) are putting the machine-learning systems known as neural networks under the microscope.</p> <p>In a study that sheds light on how these systems manage to translate text from one language to another, the researchers developed a method that pinpoints individual nodes, or “neurons,” in the networks that capture specific linguistic features.</p> <p>Neural networks learn to perform computational tasks by processing huge sets of training data. In machine translation, a network crunches language data annotated by humans, and presumably “learns” linguistic features, such as word morphology, sentence structure, and word meaning. Given new text, these networks match these learned features from one language to another, and produce a translation.</p> <p>But, in training, these networks basically adjust internal settings and values in ways the creators can’t interpret. 
For machine translation, that means the creators don’t necessarily know which linguistic features the network captures.</p> <p>In a paper being presented at this week’s Association for the Advancement of Artificial Intelligence conference, the researchers describe a method that identifies which neurons are most active when classifying specific linguistic features. They also designed a toolkit for users to analyze and manipulate how their networks translate text for various purposes, such as making up for any classification biases in the training data.</p> <p>In their paper, the researchers pinpoint neurons that are used to classify, for instance, gendered words, past and present tenses, numbers at the beginning or middle of sentences, and plural and singular words. They also show how some of these tasks require many neurons, while others require only one or two.</p> <p>“Our research aims to look inside neural networks for language and see what information they learn,” says co-author Yonatan Belinkov, a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “This work is about gaining a more fine-grained understanding of neural networks and having better control of how these models behave.”</p> <p>Co-authors on the paper are: senior research scientist James Glass and undergraduate student Anthony Bau, of CSAIL; and Hassan Sajjad, Nadir Durrani, and Fahim Dalvi, of QCRI, part of Hamad Bin Khalifa University.&nbsp;</p> <p><strong>Putting a microscope on neurons</strong></p> <p>Neural networks are structured in layers, where each layer consists of many processing nodes, each connected to nodes in layers above and below. Data are first processed in the lowest layer, which passes an output to the above layer, and so on. Each output has a different “weight” to determine how much it figures into the next layer’s computation. 
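The layer-by-layer flow of weighted outputs can be sketched as a minimal forward pass. The weights below are toy values, and bias terms and nonlinearities are omitted; real translation networks are vastly larger:

```python
def forward(layers, inputs):
    """Propagate an input vector through successive layers: each node's
    output is a weighted sum of the previous layer's outputs (bias and
    nonlinearity omitted to keep the sketch short)."""
    activations = inputs
    for weight_matrix in layers:
        activations = [
            sum(w * a for w, a in zip(row, activations))
            for row in weight_matrix
        ]
    return activations

# Toy network: 3 inputs -> 2 hidden nodes -> 1 output node.
layers = [
    [[0.5, -0.2, 0.1],   # weights into hidden node 1
     [0.3, 0.8, -0.5]],  # weights into hidden node 2
    [[1.0, 2.0]],        # weights into the output node
]
out = forward(layers, [1.0, 2.0, 3.0])
```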
During training, these weights are constantly readjusted.</p> <p>Neural networks used for machine translation train on annotated language data. In training, each layer learns different “word embeddings” for one word. Word embeddings are essentially arrays of several hundred numbers that together encode one word and that word’s function in a sentence. Each number in the embedding is calculated by a single neuron.</p> <p>In their <a href="">past work</a>, the researchers trained a model to analyze the weighted outputs of each layer to determine how the layers classified any given embedding. They found that lower layers classified relatively simpler linguistic features — such as the structure of a particular word —&nbsp;and higher layers helped classify more complex features, such as how the words combine to form meaning.</p> <p>In their new work, the researchers use this approach to determine how learned word embeddings make a linguistic classification. But they also implemented a new technique, called “linguistic correlation analysis,” that trains a model to home in on the individual neurons in each word embedding that were most important in the classification.</p> <p>The new technique combines all the embeddings captured from different layers — which each contain information about the word’s final classification — into a single embedding. As the network classifies a given word, the model learns weights for every neuron that was activated during each classification process. This provides a weight to each neuron in each word embedding that fired for a specific part of the classification.</p> <p>“The idea is, if this neuron is important, there should be a high weight that’s learned,” Belinkov says. “The neurons with high weights are the ones more important to predicting the certain linguistic property. You can think of the neurons as a lot of knobs you need to turn to get the correct combination of numbers in the embedding.
Some knobs are more important than others, so the technique is a way to assign importance to those knobs.”</p> <p><strong>Neuron ablation, model manipulation</strong></p> <p>Because each neuron is weighted, it can be ranked in order of importance. To that end, the researchers designed a toolkit, called NeuroX, that automatically ranks all neurons of a neural network according to their importance and visualizes them in a web interface.</p> <p>Users upload a network they’ve already trained, as well as new text. The app displays the text and, next to it, a list of specific neurons, each with an identification number. When a user clicks on a neuron, the text will be highlighted depending on which words and phrases the neuron activates for. From there, users can completely knock out — or “ablate” —&nbsp;the neurons, or modify the extent of their activation, to control how the network translates.</p> <p>Ablation was used to determine whether the researchers’ method accurately pinpointed the correct high-ranking neurons. In their paper, the researchers used the method to show that, by ablating high-ranking neurons in a network, its performance in classifying correlated linguistic features dipped significantly. Conversely, when they ablated lower-ranking neurons, performance suffered, but not as dramatically.</p> <p>“After you get all these rankings, you want to see what happens when you kill these neurons and see how badly it affects performance,” Belinkov says. “That’s an important result proving that the neurons we find are, in fact, important to the classification process.”</p> <p>One interesting application for the method is helping limit biases in language data. Machine-translation models, such as Google Translate, may train on data with gender bias, which can be problematic for languages with gendered words. Certain professions, for instance, may be more often referred to as male, and others as female.
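The ablation experiments described above can be sketched as zeroing an embedding's top-ranked neurons and re-scoring a simple linear probe. The ranking and probe weights below are invented for illustration; this is a stdlib sketch, not NeuroX itself:

```python
def ablate(embedding, neuron_ranking, k):
    """Zero out the k top-ranked neurons of an embedding vector."""
    ablated = list(embedding)
    for idx in neuron_ranking[:k]:
        ablated[idx] = 0.0
    return ablated

def probe_score(embedding, probe_weights):
    """Raw score of a linear probe for some linguistic property."""
    return sum(w * x for w, x in zip(probe_weights, embedding))

embedding = [0.9, 0.1, 0.8, 0.05]   # toy 4-neuron embedding
probe = [1.0, 0.1, 1.0, 0.1]        # neurons 0 and 2 carry the property
ranking = [0, 2, 1, 3]              # importance order from the analysis
full_score = probe_score(embedding, probe)
ablated_score = probe_score(ablate(embedding, ranking, 2), probe)
# Knocking out the two top-ranked neurons collapses the probe's score.
```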
When a network translates new text, it may only produce the learned gender for those words. In many online English-to-Spanish translations, for instance, “doctor” often translates into its masculine version, while “nurse” translates into its feminine version.</p> <p>“But we find we can trace individual neurons in charge of linguistic properties like gender,” Belinkov says. “If you’re able to trace them, maybe you can intervene somehow and influence the translation to translate these words more to the opposite gender … to remove or mitigate the bias.”</p> <p>In preliminary experiments, the researchers modified neurons in a network to change translated text from past to present tense with 67 percent accuracy. They modified neurons to switch the gender of words with 21 percent accuracy. “It’s still a work in progress,” Belinkov says. A next step, he adds, is improving the methodology to achieve more accurate ablation and manipulation.</p> Researchers from MIT and the Qatar Computing Research Institute (QCRI) are putting the machine-learning systems known as neural networks under the microscope. Image: MIT NewsResearch, Computer science and technology, Algorithms, Language, Machine learning, Artificial intelligence, Computer Science and Artificial Intelligence Laboratory (CSAIL), Electrical Engineering & Computer Science (eecs), School of Engineering I think, therefore I code Senior Jessy Lin, a double major in EECS and philosophy, is programming for social good. Thu, 15 Nov 2018 23:59:59 -0500 Gina Vitale | MIT News correspondent <p>To most of us, a 3-D-printed turtle just looks like a turtle: four legs, patterned skin, and a shell. But if you show it to a particular computer in a certain way, that object’s not a turtle — it’s a gun.</p> <p>Objects or images that can fool artificial intelligence like this are called adversarial examples.
Jessy Lin, a senior double-majoring in computer science and electrical engineering and in philosophy, believes that they’re a serious problem, with the potential to trip up AI systems involved in driverless cars, facial recognition, or other applications. She and several other MIT students have formed a research group called LabSix, which creates examples of these AI adversaries in real-world settings — such as <a href="">the turtle identified as a rifle</a> — to show that they are legitimate concerns.</p> <p>Lin is also working on a project called Sajal, which is a system that could allow refugees to give their medical records to doctors via a QR code. This “mobile health passport” for refugees was born out of VHacks, a hackathon organized by the Vatican, where Lin worked with a team of people she’d met only a week before. The theme was to build something for social good — a guiding principle for Lin since her days as a hackathon-frequenting high school student.</p> <p>“It’s kind of a value I’ve always had,” Lin says. “Trying to be thoughtful about, one, the impact that the technology that we put out into the world has, and, two, how to make the best use of our skills as computer scientists and engineers to do something good.”</p> <p><strong>Clearer thinking through philosophy</strong></p> <p>AI is one of Lin’s key interests in computer science, and she’s currently working in the Computational Cognitive Science group of Professor Josh Tenenbaum, which develops computational models of how humans and machines learn. The knowledge she’s gained through her other major, philosophy, relates more closely to this work than it might seem, she says.</p> <p>“There are a lot of ideas in [AI and language-learning] that tie into ideas from philosophy,” she says. “How the mind works, how we reason about things in the world, what concepts are.
There are all these really interesting abstract ideas that I feel like … studying philosophy surprisingly has helped me think about better.”</p> <p>Lin says she didn’t know a lot about philosophy coming into college. She liked the first class she took, during her first year, so she took another one, and another — before she knew it,&nbsp;she was hooked. It started out as a minor; this past spring, she declared it as a major.</p> <p>“It helped me structure my thoughts about the world in general, and think more clearly about all kinds of things,” she says.</p> <p>Through an interdisciplinary class on ethics and AI ethics, Lin realized the importance of incorporating perspectives from people who don’t work in computer science. Rather than writing those perspectives off, she wants to be someone inside the tech field who considers issues from a humanities perspective and listens to what people in other disciplines have to say.</p> <p><strong>Teaching computers to talk</strong></p> <p>Computers don’t learn languages the way that humans do — at least, not yet. Through her work in the Tenenbaum lab, Lin is trying to change that.</p> <p>According to one hypothesis, when humans hear words, we figure out what they are by first saying them to ourselves in our heads. Some computer models aim to recreate this process, including recapitulating the individual sounds in a word. These “generative” models do capture some aspects of human language learning, but they have other drawbacks that make them impractical for use with real-world speech.</p> <p>On the other hand, AI systems known as neural networks, which are trained on huge sets of data, have shown great success with speech recognition. 
Through several projects, Lin has been working on combining the strengths of both types of models, to better understand, for example, how children learn language even at a very young age.</p> <p>Ultimately, Lin says, this line of research could contribute to the development of machines that can speak in a more flexible, human way.</p> <p><strong>Hackathons and other pastimes</strong></p> <p>Lin first discovered her passion for computer science at Great Neck North High School on Long Island, New York, where she loved staying up all night to create computer programs during hackathons. (More recently, Lin has played a key role in HackMIT, one of the Institute’s flagship hackathons. Among other activities, she helped organize the event from 2015 to 2017, and in 2016 was the director of corporate relations and sponsorship.) It was also during high school that she began to attend MIT Splash, a program hosted on campus offering a variety of classes for K-12 students.</p> <p>“I was one of those people that always had this dream to come to MIT,” she says.</p> <p>Lin says her parents and her two sisters have played a big role in supporting those dreams. However, her knack for artificial intelligence doesn’t seem to be genetic.</p> <p>“My mom has her own business, and my dad is a lawyer, so … who knows where computer science came out of that?” she says, laughing.</p> <p>In recent years, Lin has put her computer science skills to use in a variety of ways. While in high school, she interned at both New York University and Columbia University. During Independent Activities Period in 2018, she worked on security for Fidex, a friend’s cryptocurrency exchange startup. The following summer she interned at Google Research NYC on the natural language understanding team, where she worked on developing memory mechanisms that allow a machine to have a longer-term memory. 
For instance, a system would remember not only the last few phrases it read in a book, but also a character from several chapters back. Lin now serves as a campus ambassador for Sequoia Capital, supporting entrepreneurship on campus.</p> <p>She currently lives in East Campus, where she enjoys the “very vibrant dorm culture.” Students there organize building projects for each first-year orientation —&nbsp;when Lin arrived, they built a roller coaster. She’s helped with the building in the years since, including a geodesic dome that was taller than she is. Outside of class and building projects, she also enjoys photography.</p> <p>Ultimately, Lin’s goal is to use her computer science skills to benefit the world. About her future after MIT, she says, “I think it could look something like trying to figure out how we can design AI that is increasingly intelligent but interacts with humans better.”</p> Jessy Lin, an MIT senior double-majoring in electrical engineering and computer science and in philosophy.Image: Bryce Vickmarkstudent, Undergraduate, Profile, Electrical Engineering & Computer Science (eecs), Philosophy, School of Engineering, School of Humanities Arts and Social Sciences, Technology and society, Humanities, Computer science and technology, Machine learning, Artificial intelligence, Algorithms, Language, Brain and cognitive science Times Higher Education ranks MIT No. 1 in business and economics, No. 2 in arts and humanities Worldwide honors for 2019 span three MIT schools. Thu, 15 Nov 2018 13:25:01 -0500 School of Humanities, Arts, and Social Sciences <p>MIT has taken the top spot in the Business and Economics subject category in the 2019 Times Higher Education World University Rankings and, for the second year in a row, the No. 2 spot worldwide for Arts and Humanities.<br /> <br /> The Times Higher Education World University Rankings is an annual publication of university rankings by&nbsp;<em>Times Higher Education,</em> a leading British education magazine.
The rankings use a set of 13 rigorous performance indicators to evaluate schools both overall and within individual fields. Criteria include teaching and learning environment, research volume and influence, and international outlook.</p> <p><strong>Business and Economics</strong></p> <p>The No. 1 ranking for Business and Economics is based on an evaluation of both the MIT Department of Economics — housed in the MIT School of Humanities, Arts, and Social Sciences — and of the MIT Sloan School of Management.</p> <p>“We are always delighted when the high quality of work going on in our school and across MIT is recognized, and warmly congratulate our colleagues in MIT Sloan with whom we share this honor,” said Melissa Nobles, the Kenan Sahin Dean of the School of Humanities, Arts, and Social Sciences (SHASS).</p> <p>The Business and Economics ranking evaluated 585 universities for their excellence in business, management, accounting, finance, economics, and econometrics subjects. In this category, MIT was followed by Stanford University and Oxford University.</p> <p>“Being recognized as first in business and management is gratifying and we are thrilled to share the honors with our colleagues in the MIT Department of Economics and MIT SHASS,” said David Schmittlein, dean of MIT Sloan.</p> <p>MIT has long been a powerhouse in economics. For over a century, the Department of Economics at MIT has played a leading role in economics education, research, and public service and the department’s faculty have won a total of nine Nobel Prizes over the years. 
MIT Sloan faculty have also won two Nobels, and the school is known as a driving force behind MIT’s entrepreneurial ecosystem: Companies started by MIT alumni have created millions of jobs and generate nearly $2 trillion a year in revenue.</p> <p><strong>Arts and Humanities</strong></p> <p>The Arts and Humanities ranking evaluated 506 universities that lead in art, performing arts, design, languages, literature, linguistics, history, philosophy, theology, architecture, and archaeology subjects. MIT was rated just below Stanford and above Harvard University in this category. MIT’s high ranking reflects the strength of both the humanities disciplines and performing arts located in MIT SHASS and the design fields and humanistic work located in MIT’s School of Architecture and Planning (SA+P).</p> <p>At MIT, outstanding humanities and arts programs in SHASS — including literature; history; music and theater arts; linguistics; philosophy; comparative media studies; writing; languages; science, technology and society; and women’s and gender studies — sit alongside equally strong initiatives within SA+P in the arts; architecture; design; urbanism; and history, theory, and criticism. SA+P is also home to the Media Lab, which focuses on unconventional research in technology, media, science, art, and design.</p> <p>“The recognition from <em>Times Higher Education</em> confirms the importance of creativity and human values in the advancement of science and technology,” said Hashim Sarkis, dean of SA+P. 
“It also rewards MIT’s longstanding commitment to ‘The Arts’ — words that are carved in the Lobby 7 dome signifying one of the main areas for the application of technology.”</p> <p>Receiving awards in multiple categories and in categories that span multiple schools at MIT is a recognition of the success MIT has had in fostering cross-disciplinary thinking, said Dean Nobles.</p> <p>“It’s a testament to the strength of MIT’s model that these areas of scholarship and pedagogy are deeply seeded in multiple administrative areas,” Nobles said. “At MIT, we know that solving challenging problems requires the combined insight and knowledge from many fields. The world’s complex issues are not only scientific and technological problems; they are as much human and ethical problems.”</p> “At MIT, we know that solving challenging problems requires the combined knowledge and insight from many fields. The world’s complex issues are not only scientific and technological problems; they are as much human and ethical problems,” says Melissa Nobles, the Kenan Sahin Dean of the School of Humanities, Arts, and Social Sciences.Photo: Madcoverboy/Wikimedia CommonsAwards, honors and fellowships, Arts, Architecture, Business and management, Comparative Media Studies/Writing, Economics, Global Studies and Languages, Humanities, History, Literature, Linguistics, Management, Music, Philosophy, Theater, Urban studies and planning, Rankings, Media Lab, School of Architecture and Planning, Sloan School of Management, School of Humanities Arts and Social Sciences Professor Emerita Catherine Chvany, Slavic scholar, dies at 91 Internationally renowned for her works in Slavic poetics and linguistics, Chvany also mentored several generations of scholars.
Tue, 13 Nov 2018 15:10:01 -0500 School of Humanities, Arts, and Social Sciences <p>Professor Emerita Catherine Vakar Chvany, a renowned Slavic linguist and literature scholar who played a pivotal role in advancing the study of Russian language and literature in MIT’s Foreign Languages and Literatures Section (now Global Studies and Languages), died on Oct. 19 in Watertown, Massachusetts. She was 91.<br /> <br /> Chvany served on the MIT faculty for 26 years before her retirement in 1993.<br /> <br /> Global Studies and Languages head Emma Teng noted that MIT’s thriving Russian studies curriculum today is a legacy of Chvany’s foundational work in the department. And, Maria Khotimsky, senior lecturer in Russian, said, “Several generations of Slavists are grateful for Professor Chvany’s inspiring mentorship, while her works in Slavic poetics and linguistics are renowned in the U.S. and internationally.”<br /> <br /> <strong>A prolific and influential scholar</strong><br /> <br /> A prolific scholar, Chvany wrote "On the Syntax of Be-Sentences in Russian" (Slavica Publishers, 1975); and co-edited four volumes: "New Studies in Russian Language and Literature" (Slavica, 1987); "Morphosyntax in Slavic" (Slavica, 1980); "Slavic Transformational Syntax" (University of Michigan, 1974); and "Studies in Poetics: Commemorative Volume: Krystyna Pomorska" (Slavica Publishers, 1995).<br /> <br /> In 1996, linguists Olga Yokoyama and Emily Klenin published an edited collection of her work, "Selected Essays of Catherine V. 
Chvany" (Slavica).<br /> <br /> In her articles, Chvany took up a range of issues in linguistics, including not only variations on the verb “to be” but also hierarchies of situations in syntax of agents and subjects; definiteness in Bulgarian, English, and Russian; other issues of lexical storage and transitivity; hierarchies in Russian cases; and issues of markedness, including an important overview, “The Evolution of the Concept of Markedness from the Prague Circle to Generative Grammar.”<br /> <br /> In literature she took up language issues in the classic "Tale of Igor's Campaign," Teffi’s poems, Nikolai Leskov’s short stories, and a novella by Aleksandr Solzhenitsyn.<br /> <br /> <strong>From Paris to Cambridge&nbsp; </strong><br /> <br /> “Catherine Chvany was always so present that it is hard to think of her as gone,” said MIT Literature Professor Ruth Perry. “She had strong opinions and wasn't afraid to speak out about them.”<br /> <br /> Chvany was born on April 2, 1927, in Paris, France, to émigré Russian parents. During World War II, she and her younger sister Anna were sent first to the Pyrenees and then to the United States with assistance from a courageous young Unitarian minister’s wife, Martha Sharp.<br /> <br /> Fluent in Russian and French, Chvany quickly mastered English. She graduated from the Girls’ Latin School in Boston in 1946 and attended Radcliffe College from 1946 to 1948. She left school to marry Lawrence Chvany and raise three children, Deborah, Barbara, and Michael.<br /> <br /> In 1961-63, she returned to school and completed her undergraduate degree in linguistics at Harvard University. 
She received her PhD in Slavic languages and literatures from Harvard in 1970 and began her career as an instructor of Russian language at Wellesley College in 1966.<br /> <br /> She joined the faculty at MIT in 1967 and became an assistant professor in 1971, an associate professor in 1974, and a full professor in 1983.<br /> <br /> <strong>Warmth, generosity, and friendship </strong><br /> <br /> Historian Philip Khoury, who was dean of the School of Humanities, Arts and Social Sciences during the latter years of Chvany’s time at MIT, remembered her warmly as “a wonderful colleague who loved engaging with me on language learning and how the MIT Russian language studies program worked.”<br /> <br /> Elizabeth Wood, a professor of Russian history, recalled the warm welcome that Chvany gave her when she came to MIT in 1990: “She always loved to stop and talk at the Tuesday faculty lunches, sharing stories of her life and her love of Slavic languages.”<br /> <br /> Chvany’s influence was broad and longstanding, in part as a result of her professional affiliations. Chvany served on the advisory or editorial boards of "Slavic and East European Journal," "Russian Language Journal," "Journal of Slavic Linguistics," "Peirce Seminar Papers," "Essays in Poetics" (United Kingdom),&nbsp;and "Supostavitelno ezikoznanie" (Bulgaria).<br /> <br /> Emily Klenin, an emerita professor of Slavic languages and literature at the University of California at Los Angeles, noted that Chvany had a practice of expressing gratitude to those whom she mentored. She connected that practice to Chvany’s experience of being aided during WWII. 
“Her warm and open attitude toward life was reflected in her continuing interest and friendship for the young people she mentored, even when, as most eventually did, they went on to lives involving completely different academic careers or even no academic career at all,” Klenin said.<br /> <br /> <strong>Memorial reception at MIT on November 18</strong><br /> <br /> Chvany is survived by her children, Deborah&nbsp;Gyapong&nbsp;and her husband Tony of Ottawa, Canada; Barbara Chvany and her husband Ken Silbert of Orinda, California; and Michael Chvany and his wife Sally of Arlington, Massachusetts; her foster-brother, William Atkinson of Cambridge, Massachusetts; six grandchildren; and nine great-grandchildren.<br /> <br /> A memorial reception will be held on Sunday, Nov. 18, from 1:30 to 4:00 p.m. in the Samberg Conference Center, 7th floor. Donations in Chvany’s name may be made to the Unitarian Universalist Association. Visit <a href="" target="_blank">Friends of the UUA</a> for online donations. Please RSVP to Michael Chvany, <a href=""></a>, if you plan to attend the memorial.</p> MIT Professor Emerita Catherine Vakar Chvany was a renowned Slavic linguist and literature scholar.Photo courtesy of the Chvany family.Faculty, Global Studies and Languages, Language, Linguistics, Literature, School of Humanities Arts and Social Sciences, Obituaries Machines that learn language more like kids do Computer model could improve human-machine interaction, provide insight into how children learn language. Wed, 31 Oct 2018 11:52:58 -0400 Rob Matheson | MIT News Office <p>Children learn language by observing their environment, listening to the people around them, and connecting the dots between what they see and hear. Among other things, this helps children establish their language’s word order, such as where subjects and verbs fall in a sentence.</p> <p>In computing, learning language is the task of syntactic and semantic parsers.
These systems are trained on sentences annotated by humans that describe the structure and meaning behind words. Parsers are becoming increasingly important for web searches, natural-language database querying, and voice-recognition systems such as Alexa and Siri. Soon, they may also be used for home robotics.</p> <p>But gathering the annotation data can be time-consuming and difficult for less common languages. Additionally, humans don’t always agree on the annotations, and the annotations themselves may not accurately reflect how people naturally speak.</p> <p>In a paper being presented at this week’s Empirical Methods in Natural Language Processing conference, MIT researchers describe a parser that learns through observation to more closely mimic a child’s language-acquisition process,&nbsp;which could greatly extend the parser’s capabilities. To learn the structure of language, the parser observes captioned videos, with no other information, and associates the words with recorded objects and actions. Given a new sentence, the parser can then use what it’s learned about the structure of the language to accurately predict a sentence’s meaning, without the video.</p> <p>This “weakly supervised” approach — meaning it requires limited training data — mimics how children can observe the world around them and learn language, without anyone providing direct context. The approach could expand the types of data and reduce the effort needed for training parsers, according to the researchers. A few directly annotated sentences, for instance, could be combined with many captioned videos, which are easier to come by, to improve performance.</p> <p>In the future, the parser could be used to improve natural interaction between humans and personal robots. A robot equipped with the parser, for instance, could constantly observe its environment to reinforce its understanding of spoken commands, including when the spoken sentences aren’t fully grammatical or clear. 
“People talk to each other in partial sentences, run-on thoughts, and jumbled language. You want a robot in your home that will adapt to their particular way of speaking … and still figure out what they mean,” says co-author Andrei Barbu, a researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds, and Machines (CBMM) within MIT’s McGovern Institute.</p> <p>The parser could also help researchers better understand how young children learn language. “A child has access to redundant, complementary information from different modalities, including hearing parents and siblings talk about the world, as well as tactile information and visual information, [which help him or her] to understand the world,” says co-author Boris Katz, a principal research scientist and head of the InfoLab Group at CSAIL. “It’s an amazing puzzle, to process all this simultaneous sensory input. This work is part of bigger piece to understand how this kind of learning happens in the world.”</p> <p>Co-authors on the paper are: first author Candace Ross, a graduate student in the Department of Electrical Engineering and Computer Science and CSAIL, and a researcher in CBMM; Yevgeni Berzak PhD ’17, a postdoc in the Computational Psycholinguistics Group in the Department of Brain and Cognitive Sciences; and CSAIL graduate student Battushig Myanganbayar.</p> <p><strong>Visual learner</strong></p> <p>For their work, the researchers combined a semantic parser with a computer-vision component trained in object, human, and activity recognition in video. Semantic parsers are generally trained on sentences annotated with code that ascribes meaning to each word and the relationships between the words. Some have been trained on still images or computer simulations.</p> <p>The new parser is the first to be trained using video, Ross says. In part, videos are more useful in reducing ambiguity. 
If the parser is unsure about, say, an action or object in a sentence, it can reference the video to clear things up. “There are temporal components —&nbsp;objects interacting with each other and with people —&nbsp;and high-level properties you wouldn’t see in a still image or just in language,” Ross says.</p> <p>The researchers compiled a dataset of about 400 videos depicting people carrying out a number of actions, including picking up an object or putting it down, and walking toward an object. Participants on the crowdsourcing platform Mechanical Turk then provided 1,200 captions for those videos. They set aside 840 video-caption examples for training and tuning, and used 360 for testing. One advantage of using vision-based parsing is “you don’t need nearly as much data — although if you had [the data], you could scale up to huge datasets,” Barbu says.</p> <p>In training, the researchers gave the parser the objective of determining whether a sentence accurately describes a given video. They fed the parser a video and matching caption. The parser extracts possible meanings of the caption as logical mathematical expressions. The sentence, “The woman is picking up an apple,” for instance, may be expressed as: <em>λxy.</em> woman <em>x,</em> pick_up <em>x y</em>, apple <em>y</em>.</p> <p>Those expressions and the video are inputted to the computer-vision algorithm, called “Sentence Tracker,” developed by Barbu and other researchers. The algorithm looks at each video frame to track how objects and people transform over time, to determine if actions are playing out as described. In this way, it determines if the meaning is possibly true of the video.</p> <p><strong>Connecting the dots</strong></p> <p>The expression with the most closely matching representations for objects, humans, and actions becomes the most likely meaning of the caption. 
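As a toy illustration of such a logical form (hypothetical code, not the researchers' implementation; the predicate names simply mirror the example above), the caption's meaning can be held as a small list of predicates over variables:

```python
# Hypothetical sketch: the caption "The woman is picking up an apple"
# as the logical form λxy. woman x, pick_up x y, apple y.
# Each entry is (predicate, argument variables).
logical_form = [
    ("woman", ("x",)),        # x is a woman
    ("pick_up", ("x", "y")),  # x picks up y
    ("apple", ("y",)),        # y is an apple
]

def describe(form):
    """Render the predicate list in the lambda-calculus notation above."""
    variables = sorted({v for _, args in form for v in args})
    body = ", ".join(f"{pred} {' '.join(args)}" for pred, args in form)
    return f"λ{''.join(variables)}. {body}"

print(describe(logical_form))  # λxy. woman x, pick_up x y, apple y
```

A video-grounded checker in the spirit of Sentence Tracker would then score whether some assignment of on-screen people and objects to x and y makes every predicate true.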
The expression, initially, may refer to many different objects and actions in the video, but the set of possible meanings serves as a training signal that helps the parser continuously winnow down possibilities. “By assuming that all of the sentences must follow the same rules, that they all come from the same language, and seeing many captioned videos, you can narrow down the meanings further,” Barbu says.</p> <p>In short, the parser learns through passive observation: To determine if a caption is true of a video, the parser by necessity must identify the highest probability meaning of the caption. “The only way to figure out if the sentence is true of a video [is] to go through this intermediate step of, ‘What does the sentence mean?’ Otherwise, you have no idea how to connect the two,” Barbu explains. “We don’t give the system the meaning for the sentence. We say, ‘There’s a sentence and a video. The sentence has to be true of the video. Figure out some intermediate representation that makes it true of the video.’”</p> <p>The training produces a syntactic and semantic grammar for the words it’s learned. Given a new sentence, the parser no longer requires videos, but leverages its grammar and lexicon to determine sentence structure and meaning.</p> <p>Ultimately, this process is learning “as if you’re a kid,” Barbu says. “You see world around you and hear people speaking to learn meaning. One day, I can give you a sentence and ask what it means and, even without a visual, you know the meaning.”</p> <p>“This research is exactly the right direction for natural language processing,” says Stefanie Tellex,&nbsp;a professor of computer science at Brown University who focuses on helping robots use natural language to communicate with humans. “To interpret&nbsp;grounded language, we need semantic representations, but it is not&nbsp;practicable to make it available at training time. 
Instead, this work captures representations of compositional&nbsp;structure using context from captioned videos. This is the paper I have been waiting for!”</p> <p>In future work, the researchers are interested in modeling interactions, not just passive observations. “Children interact with the environment as they’re learning. Our idea is to have a model that would also use perception to learn,” Ross says.</p> <p>This work was supported, in part, by the CBMM, the National Science Foundation, a Ford Foundation Graduate Research Fellowship, the Toyota Research Institute, and the MIT-IBM Brain-Inspired Multimedia Comprehension project.</p> MIT researchers have developed a “semantic parser” that learns through observation to more closely mimic a child’s language-acquisition process, which could greatly extend computing’s capabilities.Photo: MIT NewsResearch, Language, Machine learning, Artificial intelligence, Data, Computer vision, Human-computer interaction, McGovern Institute, Center for Brains Minds and Machines, Robots, Robotics, National Science Foundation (NSF), Computer science and technology, Computer Science and Artificial Intelligence Laboratory (CSAIL), Electrical Engineering & Computer Science (eecs), School of Engineering, MIT-IBM Watson AI Lab Model paves way for faster, more efficient translations of more languages New system may open up the world’s roughly 7,000 spoken languages to computer-based translation. 
Tue, 30 Oct 2018 10:59:55 -0400 Rob Matheson | MIT News Office <p>MIT researchers have developed a novel “unsupervised” language translation model —&nbsp;meaning it runs without the need for human annotations and guidance — that could lead to faster, more efficient computer-based translations of far more languages.</p> <p>Translation systems from Google, Facebook, and Amazon require training models to look for patterns in millions of documents —&nbsp;such as legal and political documents, or news articles —&nbsp;that have been translated into various languages by humans. Given new words in one language, they can then find the matching words and phrases in the other language.</p> <p>But this translational data is time consuming and difficult to gather, and simply may not exist for many of the 7,000 languages spoken worldwide. Recently, researchers have been developing “monolingual” models that make translations between texts in two languages, but without direct translational information between the two.</p> <p>In a paper being presented this week at the Conference on Empirical Methods in Natural Language Processing, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) describe a model that runs faster and more efficiently than these monolingual models.</p> <p>The model leverages a metric in statistics, called Gromov-Wasserstein distance, that essentially measures distances between points in one computational space and matches them to similarly distanced points in another space. They apply that technique to “word embeddings” of two languages, which are words represented as vectors — basically, arrays of numbers —&nbsp;with words of similar meanings clustered closer together. 
In doing so, the model quickly aligns the words, or vectors, in both embeddings that are most closely correlated by relative distances, meaning they’re likely to be direct translations.</p> <p>In experiments, the researchers’ model performed as accurately as state-of-the-art monolingual models —&nbsp;and sometimes more accurately —&nbsp;but much more quickly and using only a fraction of the computation power.</p> <p>“The model sees the words in the two languages as sets of vectors, and maps [those vectors] from one set to the other by essentially preserving relationships,” says the paper’s co-author Tommi Jaakkola, a CSAIL researcher and the Thomas Siebel Professor in the Department of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society. “The approach could help translate low-resource languages or dialects, so long as they come with enough monolingual content.”</p> <p>The model represents a step toward one of the major goals of machine translation, which is fully unsupervised word alignment, says first author David Alvarez-Melis, a CSAIL PhD student: “If you don’t have any data that matches two languages … you can map two languages and, using these distance measurements, align them.”</p> <p><strong>Relationships matter most</strong></p> <p>Aligning word embeddings for unsupervised machine translation isn’t a new concept. Recent work trains neural networks to match vectors directly in word embeddings, or matrices, from two languages together. But these methods require a lot of tweaking during training to get the alignments exactly right, which is inefficient and time consuming.</p> <p>Measuring and matching vectors based on relational distances, on the other hand, is a far more efficient method that doesn’t require much fine-tuning. No matter where word vectors fall in a given matrix, the relationship between the words, meaning their distances, will remain the same. 
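That invariance is easy to verify in a toy sketch (illustrative code with made-up 2-D vectors, not the researchers' implementation): rotating an embedding moves every vector, but the matrix of pairwise distances is unchanged.

```python
import numpy as np

# Made-up 2-D "word vectors" (illustrative only).
emb = np.array([[1.0, 0.0],   # "father"
                [1.2, 0.1],   # "mother"
                [5.0, 4.0]])  # "house"

def pairwise_distances(x):
    """Euclidean distance between every pair of rows."""
    diff = x[:, None, :] - x[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

# Rotate the whole embedding by 90 degrees: absolute positions change ...
theta = np.pi / 2
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
rotated = emb @ rotation.T

# ... but relative distances do not, which is what a
# Gromov-Wasserstein-style matching exploits.
assert np.allclose(pairwise_distances(emb), pairwise_distances(rotated))
```

"Father" stays closer to "mother" than to "house" in both spaces, so a matcher that sees only distances treats the two embeddings as interchangeable.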
For instance, the vector for “father” may fall in completely different areas in two matrices. But vectors for “father” and “mother” will most likely always be close together.</p> <p>“Those distances are invariant,” Alvarez-Melis says. “By looking at distance, and not the absolute positions of vectors, then you can skip the alignment and go directly to matching the correspondences between vectors.”</p> <p>That’s where Gromov-Wasserstein comes in handy. The technique has been used in computer science for, say, helping align image pixels in graphic design. But the metric seemed “tailor made” for word alignment, Alvarez-Melis says: “If there are points, or words, that are close together in one space, Gromov-Wasserstein is automatically going to try to find the corresponding cluster of points in the other space.”</p> <p>For training and testing, the researchers used a dataset of publicly available word embeddings, called FASTTEXT, with 110 language pairs. In these embeddings, and others, words that appear frequently in similar contexts have closely matching vectors. “Mother” and “father” will usually be close together but both farther away from, say, “house.”</p> <p><strong>Providing a “soft translation”</strong></p> <p>The model notes vectors that are closely related yet different from the others, and assigns a probability that similarly distanced vectors in the other embedding will correspond. It’s kind of like a “soft translation,” Alvarez-Melis says, “because instead of just returning a single word translation, it tells you ‘this vector, or word, has a strong correspondence with this word, or words, in the other language.’”</p> <p>An example would be in the months of the year, which appear closely together in many languages. The model will see a cluster of 12 vectors in one embedding and a remarkably similar cluster in the other embedding. “The model doesn’t know these are months,” Alvarez-Melis says. 
“It just knows there is a cluster of 12 points that aligns with a cluster of 12 points in the other language, but they’re different to the rest of the words, so they probably go together well. By finding these correspondences for each word, it then aligns the whole space simultaneously.”</p> <p>The researchers hope the work serves as a “feasibility check,” Jaakkola says, for applying the Gromov-Wasserstein method to machine-translation systems so they run faster, more efficiently, and gain access to many more languages.</p> <p>Additionally, a possible perk of the model is that it automatically produces a value that can be interpreted as quantifying, on a numerical scale, the similarity between languages. This may be useful for linguistics studies, the researchers say. The model calculates how distant all vectors are from one another in two embeddings, which depends on sentence structure and other factors. If vectors are all really close, they’ll score closer to 0, and the farther apart they are, the higher the score. 
Similar Romance languages such as French and Italian, for instance, score close to 1, while Classical Chinese scores between 6 and 9 with other major languages.</p> <p>“This gives you a nice, simple number for how similar languages are … and can be used to draw insights about the relationships between languages,” Alvarez-Melis says.</p> The new model measures distances between words with similar meanings in “word embeddings,” and then aligns the words in both embeddings that are most closely correlated by relative distances, meaning they’re most likely to be direct translations of one another.Courtesy of the researchersResearch, Language, Machine learning, Artificial intelligence, Data, Algorithms, Computer science and technology, Computer Science and Artificial Intelligence Laboratory (CSAIL), IDSS, Electrical Engineering & Computer Science (eecs), School of Engineering Professor Emeritus Sylvain Bromberger, philosopher of language and science, dies at 94 Bromberger played pivotal role in establishing MIT Department of Linguistics and Philosophy. Mon, 22 Oct 2018 13:30:01 -0400 School of Humanities, Arts, and Social Sciences <p>Professor Emeritus Sylvain&nbsp;Bromberger, a philosopher of language and of science who played a pivotal role in establishing MIT’s Department of Linguistics and Philosophy, died on Sept. 16 in Cambridge, Massachusetts. He was 94.<br /> <br /> A faculty member for more than 50 years, Bromberger helped found the department in 1977 and headed the philosophy section for several years. He officially retired in 1993 but remained very active at MIT until his death.<br /> <br /> <strong>Kindness and intellectual generosity</strong><br /> <br /> “Although he officially retired 25 years ago, Sylvain was an active and valued member of the department up to the very end,” said Alex Byrne, head of the Department of Linguistics and Philosophy. 
“He made enduring contributions to philosophy and linguistics, and his colleagues and students were frequent beneficiaries of his kindness and intellectual generosity. He had an amazing life in so many ways, and MIT is all the better for having been a part of it.”<br /> <br /> Paul Egré, director of research at the French National Center for Scientific Research (aka CNRS) and a former visiting scholar at MIT, said, “Those of us who were lucky enough to know Sylvain have lost the dearest of friends, a unique voice, a distinctive smile and laugh, someone who yet seemed to know that life is vain and fragile in unsuspected ways, but also invaluable in others.”<br /> <br /> <strong>Enduring contribution to fundamental issues about knowledge</strong></p> <p>Bromberger’s work centered largely on fundamental issues in epistemology, namely the theory of knowledge and the conditions that make knowledge possible or impossible. During the course of his career, he devoted a substantial part of his thinking to an examination of the ways in which we come to apprehend unsolved questions. His research in the philosophy of linguistics, carried out in part with the late Institute Professor Morris Halle of the linguistics section, included investigations into the foundations of phonology and of morphology.<br /> <br /> Born in 1924 in Antwerp to a French-speaking Jewish family, Bromberger escaped the German invasion of Belgium with his parents and two brothers on May 10, 1940. After reaching Paris, then Bordeaux, his family obtained one of the last visas issued by the Portuguese consul Aristides de Sousa Mendes in Bayonne. Bromberger later dedicated the volume of his collected papers "On What We Know We Don’t Know: Explanation, Theory, Linguistics, and How Questions Shape Them" (University of Chicago Press, 1992) to Sousa Mendes.<br /> <br /> The family fled to New York, and Bromberger was admitted to Columbia University. However, he chose to join the U.S. 
Army in 1942, and he went on to serve three years in the infantry. He took part in the liberation of Europe as a member of the 405th Regiment, 102nd Infantry Division. He was wounded during the invasion of Germany in 1945.<br /> <br /> After leaving the Army, Bromberger studied physics and the philosophy of science at Columbia University, earning his bachelor’s degree in 1948. He received his PhD in philosophy from Harvard University in 1961.<br /> &nbsp;<br /> <strong>Research and teaching at MIT</strong><br /> <br /> He served on the philosophy faculties at Princeton University and at the University of Chicago before joining MIT in 1966. Over the years, he trained many generations of MIT students, teaching alongside such notables as Halle, Noam Chomsky, Thomas Kuhn, and Ken Hale.<br /> <br /> In the early part of his career, Bromberger focused on critiquing the so-called deductive-nomological model of explanation, which says that to explain a phenomenon is to deductively derive the statement reporting that phenomenon from laws (universal generalizations) and antecedent conditions. For example, we can explain that this water boils from the law that all water boils at 100 degrees C, and that the temperature of the water was elevated to exactly 100 C.<br /> <br /> <strong>An influential article: Why-questions </strong><br /> <br /> One simple though key observation made by Bromberger in his analysis was that we may not only explain that the water boils at 100 C, but also how it boils, and even why it boils when heated up. This feature gradually led Bromberger to think about the semantics and pragmatics of questions and their answers.<br /> <br /> Bromberger’s 1966 “Why-questions” paper was probably his most influential article. 
In it, he highlights the fact that most scientifically valid questions put us at first in a state in which we know all actual answers to the question to be false, but in which we can nevertheless recognize the question to have a correct answer (a state he calls “p-predicament,” with “p” for “puzzle”). According to Bromberger, why-questions are particularly emblematic of this state of p-predicament, because in order to ask a why-question rationally, a number of felicity conditions (or presuppositions) must be satisfied, which are discussed in his work.<br /> <br /> The paper had an influence on later accounts of explanation, notably Bas van Fraassen’s discussion of the semantic theory of contrastivism in his book "The Scientific Image" (to explain a phenomenon is to answer a why-question with a contrast class in mind). Still today, why-questions are recognized as questions whose semantics is hard to specify, in part for reasons Bromberger discussed.<br /> <br /> In addition to investigating the syntactic, semantic, and pragmatic analysis of interrogatives, Bromberger also immersed himself in generative linguistics, with a particular interest in generative phonology, and the methodology of linguistic theory, teaching a seminar on the latter with Thomas Kuhn.<br /> <br /> <strong>A lifelong engagement with new ideas</strong><br /> <br /> In 1993, the MIT Press published a collection of essays in linguistics to honor Bromberger on the occasion of his retirement. "The View From Building 20," edited by Ken Hale and Jay Keyser, featured essays by Chomsky, Halle, Alec Marantz, and other distinguished colleagues.<br /> <br /> In 2017, Egré and Robert May put together a workshop honoring Bromberger at the Ecole Normale Supérieure in Paris. 
Talks there centered on themes from Bromberger’s work, including metacognition, questions, linguistic theory, and problems concerning word individuation.<br /> <br /> Tributes were read, notably this one from Chomsky, who used to take walks with Bromberger when they taught together:</p> <p>“Those walks were a high point of the day for many years … almost always leaving me with the same challenging question: Why? Which I’ve come to think of as Sylvain’s question. And leaving me with the understanding that it is a question we should always ask when we have surmounted some barrier in inquiry and think we have an answer, only to realize that we are like mountain climbers who think they see the peak but when they approach it find that it still lies tantalizingly beyond.”<br /> <br /> Egré noted that even when Bromberger was in his 90s, he had a “constant appetite for new ideas. He would always ask what your latest project was about, why it was interesting, and how you would deal with a specific problem,” Egré said. “His hope was that philosophy, linguistics, and the brain sciences would eventually join forces to uncover unprecedented dimensions of the human mind, erasing at least some of our ignorance.”<br /> <br /> Bromberger’s wife of 64 years, Nancy, died in 2014. He is survived by two sons, Allen and Daniel; and three grandchildren, Michael Barrows, Abigail Bromberger, and Eliza Bromberger.<br /> &nbsp;</p> <h5><br /> <em>Written by Paul Egré and Kathryn O’Neill, with contributions from Daniel Bromberger, Allen Bromberger, Samuel Jay Keyser, Robert May, Agustin Rayo, Philippe Schlenker, and Benjamin Spector</em><br /> &nbsp;</h5> "Sylvain made enduring contributions to philosophy and linguistics," said Alex Byrne, head of the MIT Department of Linguistics and Philosophy, "and his colleagues and students were frequent beneficiaries of his kindness and intellectual generosity. 
He had an amazing life in so many ways, and MIT is all the better for having been a part of it."Photo courtesy of the Bromberger family Philosophy, Faculty, Obituaries, Linguistics, School of Humanities Arts and Social Sciences Machine-learning system tackles speech and object recognition, all at once Model learns to pick out objects within an image, using spoken descriptions. Tue, 18 Sep 2018 00:00:00 -0400 Rob Matheson | MIT News Office <p>MIT computer scientists have developed a system that learns to identify objects within an image, based on a spoken description of the image. Given an image and an audio caption, the model will highlight in real-time the relevant regions of the image being described.</p> <p>Unlike current speech-recognition technologies, the model doesn’t require manual transcriptions and annotations of the examples it’s trained on. Instead, it learns words directly from recorded speech clips and objects in raw images, and associates them with one another.</p> <p>The model can currently recognize only several hundred different words and object types. But the researchers hope that one day their combined speech-object recognition technique could save countless hours of manual labor and open new doors in speech and image recognition.</p> <p>Speech-recognition systems such as Siri, for instance, require transcriptions of many thousands of hours of speech recordings. Using these data, the systems learn to map speech signals with specific words. Such an approach becomes especially problematic when, say, new terms enter our lexicon, and the systems must be retrained.</p> <p>“We wanted to do speech recognition in a way that’s more natural, leveraging additional signals and information that humans have the benefit of using, but that machine learning algorithms don’t typically have access to. 
We got the idea of training a model in a manner similar to walking a child through the world and narrating what you’re seeing,” says David Harwath, a researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Spoken Language Systems Group. Harwath co-authored a paper describing the model that was presented at the recent European Conference on Computer Vision.</p> <p>In the paper, the researchers demonstrate their model on an image of a young girl with blonde hair and blue eyes, wearing a blue dress, with a white lighthouse with a red roof in the background. The model learned to associate which pixels in the image corresponded with the words “girl,” “blonde hair,” “blue eyes,” “blue dress,” “white lighthouse,” and “red roof.” When an audio caption was narrated, the model then highlighted each of those objects in the image as they were described.</p> <p>One promising application is learning translations between different languages, without the need for a bilingual annotator. Of the estimated 7,000 languages spoken worldwide, only 100 or so have enough transcription data for speech recognition. Consider, however, a situation where two different-language speakers describe the same image. 
If the model learns speech signals from language A that correspond to objects in the image, and learns the signals in language B that correspond to those same objects, it could assume those two signals — and matching words — are translations of one another.</p> <p>“There’s potential there for a Babel Fish-type of mechanism,” Harwath says, referring to the fictitious living earpiece in the “Hitchhiker’s Guide to the Galaxy” novels that translates different languages to the wearer.</p> <p>The CSAIL co-authors are: graduate student Adria Recasens; visiting student Didac Suris; former researcher Galen Chuang; Antonio Torralba, a professor of electrical engineering and computer science who also heads the MIT-IBM Watson AI Lab; and Senior Research Scientist James Glass, who leads the Spoken Language Systems Group at CSAIL.</p> <p><strong>Audio-visual associations</strong></p> <p>This work expands on an earlier model developed by Harwath, Glass, and Torralba that correlates speech with groups of thematically related images. In the earlier research, they put images of scenes from a classification <a href="">database</a> on the crowdsourcing Mechanical Turk platform. They then had people describe the images as if they were narrating to a child, for about 10 seconds. They compiled more than 200,000 pairs of images and audio captions, in hundreds of different categories, such as beaches, shopping malls, city streets, and bedrooms.</p> <p>They then designed a model consisting of two separate convolutional neural networks (CNNs). One processes images, and one processes spectrograms, a visual representation of audio signals as they vary over time. The highest layer of the model computes outputs of the two networks and maps the speech patterns with image data.</p> <p>The researchers would, for instance, feed the model caption A and image A, which is correct. Then, they would feed it a random caption B with image A, which is an incorrect pairing. 
After comparing thousands of wrong captions with image A, the model learns the speech signals corresponding with image A, and associates those signals with words in the captions. As described in a 2016 <a href="">study</a>, the model learned, for instance, to pick out the signal corresponding to the word “water,” and to retrieve images with bodies of water.</p> <p>“But it didn’t provide a way to say, ‘This is exact point in time that somebody said a specific word that refers to that specific patch of pixels,’” Harwath says.</p> <p><strong>Making a matchmap</strong></p> <p>In the new paper, the researchers modified the model to associate specific words with specific patches of pixels. The researchers trained the model on the same database, but with a new total of 400,000 image-caption pairs. They held out 1,000 random pairs for testing.</p> <p>In training, the model is similarly given correct and incorrect images and captions. But this time, the image-analyzing CNN divides the image into a grid of cells consisting of patches of pixels. The audio-analyzing CNN divides the spectrogram into segments of, say, one second to capture a word or two.</p> <p>With the correct image and caption pair, the model matches the first cell of the grid to the first segment of audio, then matches that same cell with the second segment of audio, and so on, all the way through each grid cell and across all time segments. For each cell and audio segment, it provides a similarity score, depending on how closely the signal corresponds to the object.</p> <p>The challenge is that, during training, the model doesn’t have access to any true alignment information between the speech and the image. 
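The cell-by-segment scoring can be sketched as follows (hypothetical shapes and a made-up pooling step, not the published model): embeddings for each image cell and each audio segment are compared in a shared space, yielding one similarity score per pair.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for CNN outputs (illustrative only): 9 image cells from a
# 3x3 grid and 5 one-second audio segments, each embedded in a shared
# 16-dimensional space.
image_cells = rng.standard_normal((9, 16))
audio_segments = rng.standard_normal((5, 16))

# One dot-product similarity score per (cell, segment) pair.
matchmap = image_cells @ audio_segments.T  # shape (9, 5)

# A single score for the whole image-caption pair, pooled here by taking
# the best-matching cell for each segment and averaging. During training,
# correct pairs are pushed to score higher than mismatched ones.
pair_score = matchmap.max(axis=0).mean()
print(matchmap.shape)  # (9, 5)
```

High entries in the grid are the candidate word-object alignments that training gradually sharpens.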
“The biggest contribution of the paper,” Harwath says, “is demonstrating that these cross-modal [audio and visual] alignments can be inferred automatically by simply teaching the network which images and captions belong together and which pairs don’t.”</p> <p>The authors dub this automatically learned association between a spoken caption’s waveform and the image pixels a “matchmap.” After training on thousands of image-caption pairs, the network narrows down those alignments to specific words representing specific objects in that matchmap.</p> <p>“It’s kind of like the Big Bang, where matter was really dispersed, but then coalesced into planets and stars,” Harwath says. “Predictions start dispersed everywhere but, as you go through training, they converge into an alignment that represents meaningful semantic groundings between spoken words and visual objects.”</p> <p>“It is exciting to see that neural methods are now also able to associate image elements with audio segments, without requiring text as an intermediary,” says Florian Metze, an associate research professor at the Language Technologies Institute at Carnegie Mellon University. “This is not human-like learning; it’s based entirely on correlations, without any feedback, but it might help us understand how shared representations might be formed from audio and visual cues. ... [M]achine [language] translation is an application, but it could also be used in documentation of endangered languages (if the data requirements can be brought down). 
One could also think about speech recognition for non-mainstream use cases, such as people with disabilities and children.”</p> MIT computer scientists have developed a system that learns to identify objects within an image, based on a spoken description of the image.Image: Christine DaniloffResearch, Computer science and technology, Language, Machine learning, Artificial intelligence, Data, Algorithms, Computer vision, Electrical engineering and computer science (EECS), Computer Science and Artificial Intelligence Laboratory (CSAIL), School of Engineering, MIT-IBM Watson AI Lab Civil rights in a complex world Professor Bruno Perreau examines the relationships between personal identity and public institutions. Wed, 12 Sep 2018 00:00:00 -0400 Peter Dizikes | MIT News Office <p>For as long as he can remember, Bruno Perreau hoped to teach others.</p> <p>“Being a teacher was something I wanted from the youngest age,” says Perreau, recalling his childhood in France. That wish has come true: Perreau taught for a decade in the French university system and is now the Cynthia L. Reed Associate Professor of French Studies and Language at MIT.</p> <p>But Perreau is also an accomplished researcher with two well-received books to his name in English, and several other books and edited volumes to his credit in French.&nbsp;In France, he worked as an activist while entering academia. As an intellectual he has weighed in on public debates, especially those involving adoption policy and gay rights.&nbsp;</p> <p>In short, Perreau is many things at once: teacher, scholar, author, public commentator. That seems fitting, because, as Perreau wrote in one of his books, people tend to “combine several types and levels of identity” in modern life. 
Indeed, much of Perreau’s work is about how personal identity interacts with states and institutions.</p> <p>For instance, Perreau’s 2014 book, “The Politics of Adoption,” examined restrictive policies that limited adoption rights in France for much of the postwar era, and his 2017 book “Queer Theory: The French Response” was a close look at the intellectual landscape surrounding France’s 2013 law that opened marriage and adoption to gay couples.</p> <p>In the latter book, Perreau wrote that states should recognize “the multipositional nature of minorities” in society. As he notes, one can be, for instance, gay, black, and a parent at the same time, and such identities vary considerably, depending on the person and circumstances. Accounting for such considerations may seem straightforward, but in practice can be a major challenge for the law.</p> <p>To answer this challenge, France has long propounded a blanket universalism that formally downplays social differences. In some ways, this has helped France establish principles of equality. In others, this “logic of unity,” as Perreau calls it, has made it harder for the French to construct equal rights while explicitly acknowledging particular social differences.</p> <p>On the other hand, the institution of academia is often structured to let people take on multiple roles at once. Thus, for his scholarship, teaching, public engagement, and more, MIT granted Perreau tenure in 2017.</p> <p>“MIT is extremely flexible and supportive,” Perreau says.</p> <p><strong>“They invested everything in me”</strong></p> <p>Perreau grew up in Chalon-sur-Saône, France, in the southern part of Burgundy. His mother was a teacher. Growing up, Perreau was one of the few children in an extended family of modest means, and received strong moral support from relatives who wanted him to succeed.</p> <p>“They didn’t have much social capital, but they invested everything in me,” Perreau says.
“And they tell you, ‘You’re going to do great things,’ and you end up believing it. And you don’t want to disappoint them.”</p> <p>Meanwhile, as a good student, Perreau found that classroom success had unexpected benefits.</p> <p>“As a kid, when I was 7, 8, 9 years old, I would be asked by younger kids to teach them things in school,” Perreau says. “When I was 11 years old, at night, the other kids would call me and say, ‘Can you tell me, how should I understand this math problem?’ It was great for me. I really enjoyed doing this. It gave me a social identity that made me a little different, a little more likeable. … It was probably a way for me to avoid some bullying as an effeminate boy. I partially managed to avoid that, but not fully.”</p> <p>Indeed, as Perreau sees it, this helped shape his long-term identity: He could become a strong student without acute worries about conforming to the crowd. As such, he kept earning scholarships that took him through school, then to college at the Institute of Political Studies (Sciences Po) in Lyon, and to a master’s program at Loughborough University in England.</p> <p>At the time, Perreau was studying political institutions broadly. “How institutions work, how they help us, because we don’t have to redefine the rules of the game every time, is fascinating to me,” he says. Before he continued his studies, though, Perreau moved to Paris and became an LGBT activist, and instantly liked it.</p> <p>“Suddenly I was surrounded by people involved in the same struggle,” Perreau says. But he also decided to pursue a PhD, at the University of Paris 1, the Panthéon-Sorbonne. There, Perreau took an inspiring class on the history of political ideas in relation to parity laws, gender, and more, taught by Evelyne Pisier, which steered him in a new direction.</p> <p>“It was a total revelation for me because I had no idea that my own personal experience, combined with activism, could also resonate in the university system,” Perreau says.
Closely examining how institutions and forms of civil rights evolve, Perreau got his doctorate and took a job at Sciences Po Paris, where he taught until joining MIT in 2010.</p> <p><strong>Interesting times</strong></p> <p>Perreau’s activism and scholarship have continued to intersect as his career has unfolded. His activist work — combined with the efforts of many others, he notes — helped open up the discussion that ultimately led to France’s 2013 law opening marriage to same-sex couples. This has made him exceptionally well-placed to write about the subject and to comment publicly about state policy, in newspaper opinion pieces and on television and radio.</p> <p>Whatever progress that law represents, Perreau is hardly complacent about it. In “Queer Theory: The French Response,” he elaborates on the idea that citizenship itself does not come from universal norms. Rather, Perreau writes, “a feeling of belonging stems from a challenge to, rather than a sanctification of, the social order.”</p> <p>At any given time, a legal code will not “fully grasp reality,” as he puts it. A challenge for citizens, then, is to identify the mismatches between laws, norms, and complex reality. In this sense citizenship exists in a state of tension with the established order, not by conforming to it.</p> <p>This idea is at the center of one of the books Perreau is now working on, about “minority democracy.” By that, Perreau says, he means “how we can develop systems of representation and presence for minorities in the public space that do not require them to abandon who they are.”</p> <p>If citizenship is about seeking to improve our systems of governing, then Perreau’s work helps identify him in another way: as a citizen. It is one more descriptor to add to the list, along with activist, scholar, and, yes, teacher.</p> “Being a teacher was something I wanted from the youngest age,” says Cynthia L.
Reed Associate Professor of French Studies and Language Bruno Perreau.Image: Bryce VickmarkGlobal Studies and Languages, France, Faculty, Profile, Lesbian, gay, bisexual, transgender, queer/questioning (LGBTQ), School of Humanities Arts and Social Sciences, Language, SHASS How music lessons can improve language skills Study links piano education with better word discrimination by kindergartners. Mon, 25 Jun 2018 14:59:59 -0400 Anne Trafton | MIT News Office <p>Many studies have shown that musical training can enhance language skills. However, it was unknown whether music lessons improve general cognitive ability, leading to better language proficiency, or if the effect of music is more specific to language processing.</p> <p>A new study from MIT has found that piano lessons have a very specific effect on kindergartners’ ability to distinguish different pitches, which translates into an improvement in discriminating between spoken words. However, the piano lessons did not appear to confer any benefit for overall cognitive ability, as measured by IQ, attention span, and working memory.</p> <p>“The children didn’t differ in the more broad cognitive measures, but they did show some improvements in word discrimination, particularly for consonants. The piano group showed the best improvement there,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research and the senior author of the paper.</p> <p>The study, performed in Beijing, suggests that musical training is at least as beneficial as, and possibly more beneficial than, extra reading lessons in improving language skills.
The school where the study was performed has continued to offer piano lessons to students, and the researchers hope their findings could encourage other schools to keep or enhance their music offerings.</p> <p>Yun Nan, an associate professor at Beijing Normal University, is the lead author of the study, which appears in the <em>Proceedings of the National Academy of Sciences</em> the week of June 25.</p> <p>Other authors include Li Liu, Hua Shu, and Qi Dong, all of Beijing Normal University; Eveline Geiser, a former MIT research scientist; Chen-Chen Gong, an MIT research associate; and John Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research.</p> <p><strong>Benefits of music</strong></p> <p>Previous studies have shown that on average, musicians perform better than nonmusicians on tasks such as reading comprehension, distinguishing speech from background noise, and rapid auditory processing. However, most of these studies have been done by asking people about their past musical training. 
The MIT researchers wanted to perform a more controlled study in which they could randomly assign children to receive music lessons or not, and then measure the effects.</p> <p>They decided to perform the study at a school in Beijing, along with researchers from the IDG/McGovern Institute at Beijing Normal University, in part because education officials there were interested in studying the value of music education versus additional reading instruction.</p> <p>“If children who received music training did as well or better than children who received additional academic instruction, that could be a justification for why schools might want to continue to fund music,” Desimone says.</p> <p>The 74 children participating in the study were divided into three groups: one that received 45-minute piano lessons three times a week; one that received extra reading instruction for the same period of time; and one that received neither intervention. All children were 4 or 5 years old and spoke Mandarin as their native language.</p> <p>After six months, the researchers tested the children on their ability to discriminate words based on differences in vowels, consonants, or tone (many Mandarin words differ only in tone). Better word discrimination usually corresponds with better phonological awareness — the awareness of the sound structure of words, which is a key component of learning to read.</p> <p>Children who had piano lessons showed a significant advantage over children in the extra reading group in discriminating between words that differ by one consonant. Children in both the piano group and extra reading group performed better than children who received neither intervention when it came to discriminating words based on vowel differences.</p> <p>The researchers also used electroencephalography (EEG) to measure brain activity and found that children in the piano group had stronger responses than the other children when they listened to a series of tones of different pitch.
This suggests that a greater sensitivity to pitch differences is what helped the children who took piano lessons to better distinguish different words, Desimone says.</p> <p>“That’s a big thing for kids in learning language: being able to hear the differences between words,” he says. “They really did benefit from that.”</p> <p>In tests of IQ, attention, and working memory, the researchers did not find any significant differences among the three groups of children, suggesting that the piano lessons did not confer any improvement on overall cognitive function.</p> <p>Aniruddh Patel, a professor of psychology at Tufts University, says the findings also address the important question of whether purely instrumental musical training can enhance speech processing.</p> <p>“This study answers the question in the affirmative, with an elegant design that directly compares the effect of music and language instruction on young children. The work specifically relates behavioral improvements in speech perception to the neural impact of musical training, which has both theoretical and real-world significance,” says Patel, who was not involved in the research.</p> <p><strong>Educational payoff</strong></p> <p>Desimone says he hopes the findings will help to convince education officials who are considering abandoning music classes in schools not to do so.</p> <p>“There are positive benefits to piano education in young kids, and it looks like for recognizing differences between sounds including speech sounds, it’s better than extra reading. That means schools could invest in music and there will be generalization to speech sounds,” Desimone says. “It’s not worse than giving extra reading to the kids, which is probably what many schools are tempted to do — get rid of the arts education and just have more reading.”</p> <p>Desimone now hopes to delve further into the neurological changes caused by music training.
One way to do that is to perform EEG tests before and after a single intense music lesson to see how the brain’s activity has been altered.</p> <p>The research was funded by the National Natural Science Foundation of China, the Beijing Municipal Science and Technology Commission, the Interdiscipline Research Funds of Beijing Normal University, and the Fundamental Research Funds for the Central Universities.</p> A new study from MIT has found that piano lessons have a very specific effect on kindergartners’ ability to distinguish different pitches, which translates into an improvement in discriminating between spoken words.Research, Language, Music, Brain and cognitive sciences, McGovern Institute, School of Science QS ranks MIT the world’s No. 1 university for 2018-19 Ranked at the top for the seventh straight year, the Institute also places first in 12 of 48 disciplines. Wed, 06 Jun 2018 16:00:00 -0400 MIT News Office <p>For the seventh year in a row, MIT has topped the QS World University Rankings, which were announced today.</p> <p>The full 2018-19 rankings were published by Quacquarelli Symonds, an organization specializing in education and study abroad. The QS rankings were based on academic reputation, employer reputation, citations per faculty, student-to-faculty ratio, proportion of international faculty, and proportion of international students. MIT earned a perfect overall score of 100.</p> <p>MIT was also ranked the world’s top university in <a href="">12 of 48 disciplines ranked by QS</a>, as announced in February of this year.</p> <p>MIT received a No.
1 ranking in the following QS subject areas: Architecture/Built Environment; Linguistics; Chemical Engineering; Civil and Structural Engineering; Computer Science and Information Systems; Electrical and Electronic Engineering; Mechanical, Aeronautical and Manufacturing Engineering; Chemistry; Materials Science; Mathematics; Physics and Astronomy; and Statistics and Operational Research. &nbsp;&nbsp;</p> <p>Additional high-ranking MIT subjects include: Art and Design (No. 4), Biological Sciences (No. 2), Earth and Marine Sciences (No. 3), Environmental Sciences (No. 3), Accounting and Finance (No. 2), Business and Management Studies (No. 4), and Economics and Econometrics (No. 2).</p> Photo: AboveSummit with Christopher HartingRankings, Architecture, Chemical engineering, Chemistry, Civil and environmental engineering, Electrical Engineering & Computer Science (eecs), Economics, Linguistics, Materials Science and Engineering, DMSE, Mechanical engineering, Aeronautical and astronautical engineering, Physics, Business and management, Accounting, Finance, Arts, Design, Mathematics, EAPS, School of Architecture and Planning, School of Humanities Arts and Social Sciences, School of Science, School of Engineering, Sloan School of Management Gauging language proficiency through eye movement Study tracks eye movement to determine how well people understand English as a foreign language. Tue, 22 May 2018 23:59:59 -0400 Peter Dizikes | MIT News Office <p>A study by MIT researchers has uncovered a new way of telling how well people are learning English: tracking their eyes.</p> <p>That’s right. 
Using data generated by cameras trained on readers’ eyes, the research team has found that patterns of eye movement — particularly how long people’s eyes rest on certain words — correlate strongly with performance on standardized tests of English as a second language.</p> <p>“To a large extent [eye movement] captures linguistic proficiency, as we can measure it against benchmarks of standardized tests,” says Yevgeni Berzak, a postdoc in MIT’s Department of Brain and Cognitive Sciences (BCS) and co-author of a new paper outlining the research. He adds: “The signal of eye movement during reading is very rich and very informative.”</p> <p>Indeed, the researchers even suggest the new method has potential use as a testing tool. “It has real potential applications,” says Roger Levy, an associate professor in BCS and another of the study’s co-authors.</p> <p>The paper, “Assessing Language Proficiency from Eye Movements in Reading,” is being published in the Proceedings of the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. The authors are Berzak, a postdoc in the Computational Psycholinguistics Group in BCS; Boris Katz, a principal research scientist and head of the InfoLab Group at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL); and Levy, who also directs the Computational Psycholinguistics Lab in BCS.</p> <p><strong>The illusion of continuity</strong></p> <p>The study delves into a phenomenon about reading that we may never notice, no matter how much we read: Our eyes do not move continuously along a string of text, but instead fix on particular words for up to 200 to 250 milliseconds. We also take leaps from one word to another that may last about 1/20 of a second.</p> <p>“Although you have a subjective experience of a continuous, smooth pass over text, that’s absolutely not what your eyes are doing,” says Levy.
“Your eyes are jumping around, mostly forward, sometimes backward. Your mind stitches together a smooth experience. … It’s a testimony to the ability of the mind to create illusions.”</p> <p>But if you are learning a new language, your eyes may dwell on particular words for longer periods of time, as you try to comprehend the text. The particular pattern of eye movement, for this reason, can reveal a lot about comprehension, at least when analyzed in a clearly defined context.</p> <p>To conduct the study, the researchers used a dataset of eye movement records from work conducted by Berzak. The dataset has 145 students of English as a second language, divided almost evenly among four native languages — Chinese, Japanese, Portuguese, and Spanish — as well as 37 native English speakers.</p> <p>The readers were given 156 sentences to read, half of which were part of a “fixed test” in which everyone in the study read the same sentences. The video footage enabled the research team to focus intensively on a series of duration times — the length of time readers were fixated on particular words.</p> <p>The research team called the set of metrics they used the “EyeScore.” After evaluating how it correlated with the Michigan English Test (MET) and the Test of English as a Foreign Language (TOEFL), they concluded in the paper that the EyeScore method produced “competitive results” with the standardized tests, “further strengthening the evidence for the ability of our approach to capture language proficiency.”</p> <p>As a result, the authors write, the new method is “the first proof of concept for a system which utilizes eye tracking to measure linguistic ability.”</p> <p><strong>Sentence by sentence</strong></p> <p>Other scholars say the study is an interesting addition to the research literature on the subject.</p> <p>“The method [used in the study] is very innovative and — in my opinion — holds much promise for using eye-tracking technology to its full potential,” says Erik 
Reichle, head of the Department of Psychology at Macquarie University in Sydney, Australia, who has conducted many experiments about tracking eye movement. Reichle adds that he suspects the paper “will have a big impact in a number of different fields, including those more directly related to second-language learning.” &nbsp; &nbsp;</p> <p>As the researchers see it, the current study is just one step on a longer journey of exploration about the interactions of language and cognition.</p> <p>As Katz says, “The bigger question is, how does language affect your brain?” Given that we only began processing written text within the last several thousand years, he notes, our reading ability is an example of the “amazing plasticity” of the brain. Before too long, he adds, “We could actually be in a position to start answering these questions.”</p> <p>Levy, for his part, thinks that it may be possible to make these eye tests about reading more specific. Rather than evaluating reader comprehension over a corpus of 156 sentences, as the current study did, experts might be able to render more definitive judgments about even smaller strings of text.</p> <p>“One thing that we would hope to do in the future that we haven’t done yet, for example, is ask, on a sentence-by-sentence basis, to what extent can we tell how well you understood a sentence by the eye movements you made when you read it,” Levy says. “That’s an open question nobody’s answered. We hope we might be able to do that in the future.”</p> <p>The study was supported, in part, by MIT’s Center for Brains, Minds, and Machines, through a National Science Foundation grant.</p> A study by MIT researchers has uncovered a new way of telling how well people are learning English: tracking their eyes. 
Image: Christine Daniloff/MITBrain and cognitive sciences, Linguistics, Research, Foreign languages and literatures, School of Science, Computer Science and Artificial Intelligence Laboratory (CSAIL), National Science Foundation (NSF) William Rodríguez: Helping others broaden their horizons MIT senior and Model UN leader William Rodríguez works to encourage the global exchange of ideas. Sun, 20 May 2018 23:59:59 -0400 Fatima Husain | MIT News correspondent <p>William Rodríguez grew up resetting the family router and fixing all things technological in his home in San Juan, Puerto Rico. When the self-described computer junkie began to look at colleges, he knew that MIT was the right fit.</p> <p>“I’ve always been interested in technology and in the different ways in which you can make people’s lives better through [technological] tools,” Rodríguez says. “MIT had that spirit of using technology and underscoring the importance of innovation.”</p> <p>Once he arrived in Cambridge, Rodríguez followed his passion for computer science and majored in electrical engineering and computer science. After taking 14.73 (The Challenge of World Poverty) taught by Esther Duflo, the Abdul Latif Jameel Professor of Poverty Alleviation and Development Economics, Rodríguez decided to declare a minor in economics.</p> <p>“I learned that economics was basically applying science to humans, to the systems we create, and to the ways we think,” Rodríguez says. “I enjoy engineering, but I also really enjoy the humanities and social sciences, so this pairing came naturally to me.”</p> <p>Rodriguez’s dual interests in technology and in his fellow humans have also merged in his many extracurricular activities at MIT. 
Whether teaching entrepreneurship in Brazil, volunteering on campus, or helping to lead MIT’s Model UN organization, he is happiest when broadening his horizons and helping others do the same.</p> <p><strong>Transcending languages</strong></p> <p>Before coming to MIT, Rodríguez studied French for five years. Determined to keep practicing the language, he studied conversational French during his first year at the Institute. Intending to learn more languages on top of English, Spanish, and French, Rodríguez scanned the language course offerings and enrolled in Portuguese for Spanish speakers. After practicing his Portuguese throughout the semester, he decided to pursue a summer internship in Brazil through MIT International Science and Technology Initiatives (MISTI).</p> <p>Rodríguez interned at Take.NET, a mobile technology company based in Belo Horizonte, Brazil. At Take.NET, he designed the marketing plan for the company’s new artificially intelligent chatbot. But that’s not all Rodríguez accomplished.</p> <p>“There, I basically became fluent in the language. It was really funny — in the host family, the mother was trying to learn English, the father was trying to learn Spanish, and I was trying to learn Portuguese,” Rodríguez says. “We really complemented each other, and I became immersed in the language and culture.”</p> <p>Rodríguez’s positive internship experience led him back to Brazil during his senior year Independent Activities Period (IAP) — this time to São Paulo to teach through the MIT Global Teaching Labs. Along with Massachusetts-area graduate and undergraduate students, Rodríguez served as a teaching assistant for a class on entrepreneurship.
The class consisted of a series of entrepreneurship workshops targeted to approximately 100 Brazilian students, graduates, and professionals, covering topics “from the basics of choosing an idea to bringing an idea to market.”</p> <p>Rodríguez says he could envision a future for himself in entrepreneurship.</p> <p>“I believe it’s one of my long-term goals,” Rodríguez says. “I think it’s very powerful to choose an idea that you really believe in to help people in some way, and to have the tenacity, the perseverance, to build a team, lead a team, and bring the idea to market.”</p> <p>But, before that, Rodríguez wants to learn at least two more languages, beginning with Japanese or Mandarin.</p> <p><strong>Service for all</strong></p> <p>Before beginning his first semester at MIT, Rodríguez participated in the MIT Office of Minority Education’s Interphase EDGE program, a two-year scholarship program that begins in the summer before students arrive at MIT and helps ease the academic and social transition from high school to college.</p> <p>After students complete the program, they have the opportunity to serve as associate advisors for incoming students. As an associate advisor, Rodríguez checks in with a cohort of 16 students throughout their first few semesters at MIT, sometimes recommending classes, decoding internship applications, or lending a sympathetic ear when students need to talk.</p> <p>“I really valued the relationships I built, so I saw [advising] as a way to give back to the new students who were coming in through the program,” Rodríguez says.</p> <p>Rodríguez has also helped organize the MIT-Harvard Relay for Life since his first year, and has served as vice president of team management, and later, director.
In 2017, a joint MIT-Harvard Relay for Life event raised $50,000 for the American Cancer Society.</p> <p>After Hurricanes Irma and Maria swept across Puerto Rico, Rodríguez teamed with fellow senior Gabriel Ginorio to organize a three-day donation drive in October through the MIT Association of Puerto Rican Students. The money and supplies gathered through the drive served over 5,000 Puerto Rican residents.</p> <p>For his work in public service, Rodríguez <a href="">was recognized</a> as an MIT Distinguished Peer in Public Service this past October.</p> <p><strong>Developing global literacy</strong></p> <p>During high school, Rodríguez was an active member of Model United Nations, an international student organization that simulates the workings of the United Nations assembly. The experience sparked an interest in international affairs and policy that he wanted to continue during his time at MIT.</p> <p>As part of the MIT Model United Nations Conference, Rodríguez now plans the competitions in which he once participated for high school students around the world. In the competition, students propose “tangible solutions that can ameliorate an issue or improve certain situations in different places around the world.”</p> <p>Part of planning the competitions involves generating the prompts students respond to in competition. At MIT’s annual Model United Nations competition, students have worked on a wide range of issues, including “the monetary challenge of cryptocurrency, nuclear power as a viable source of energy, and how governments tackle climate change.”</p> <p>Rodríguez served as chief operating officer of the student organization during his sophomore year, and as president during his junior year. Eager to bring the competition to the international stage, he founded the first international chapter of the conference, which was held in Shanghai, China, this past year.</p> <p>“I really value the mission [of Model United Nations],” Rodríguez says.
“You want to broaden the horizons so that students are exposed to problems and issues that they might never encounter in a typical classroom setting — problems involving countries and cultures different from their own.”</p> <p>This coming August, Rodríguez plans to travel back to Shanghai for the second international conference. His involvement with international policy may not end there. “I’ve considered going into roles in economic development research and policy, whether it be a role in the United Nations or a government,” he says. “This could be the scope of things I’d be happy being involved with.”&nbsp;</p> “I’ve always been interested in technology and in the different ways in which you can make people’s lives better through [technological] tools,” Rodríguez says.Image: Ian MacLellanProfile, Students, Undergraduate, Electrical Engineering & Computer Science (eecs), Contests and academic competitions, Economics, Global, Independent Activities Period, International development, Language, Leadership, MISTI, Brazil, School of Science, School of Humanities Arts and Social Sciences J-WEL names spring 2018 grant recipients Education Innovation Grant program for pK-12 and higher education awards $400,000 to MIT faculty to support education innovation both at MIT and globally. Mon, 07 May 2018 14:15:00 -0400 Danielle Pagano | Abdul Latif Jameel World Education Lab (J-WEL) <p>The Abdul Latif Jameel World Education Lab (J-WEL) at MIT has selected 10 projects to receive grants as part of its program to support educational innovation. J-WEL grants support initiatives that impact MIT education, with the broad potential for impact in global settings. 
They are awarded bi-annually to MIT faculty, with spring and fall rounds.</p> <p><strong>Grant recipients, pK-12 projects</strong></p> <p>J-WEL Grants in pK-12 Education Innovation were awarded to the following projects:</p> <p>"Teacher Practice Spaces for Equity Teaching Practices" — Justin Reich, professor of comparative media studies/writing. Equity Teaching Practices are classroom strategies that counter the pernicious effects of structural inequality. Reich’s team will use their simulation platform, TeacherMoments, to help teachers from all disciplines, with particular emphasis on STEM fields, rehearse for and reflect on these practices.</p> <p>"Tailoring STEM for Girls with Social Impact: Curricula, Self-Efficacy Change and Factors of Success in Multi-Week Interventions" — David Wallace, professor of mechanical engineering, and Larissa&nbsp;Nietner, postdoc in the Department of Mechanical Engineering. To increase their participation in STEM, girls need to experience STEM content as socially impactful. This project will develop and test adequate curricula, materials, and generalizable principles, which can be shared and transferred between schools across both the U.S. and the developing world.</p> <p>"High School Global STEM Project-Based Learning, Leveraging MIT BLOSSOMS" — Richard Larson, the Mitsui Professor of Data, Systems, and Society, and Dan Frey, professor of mechanical engineering. This project will leverage MIT BLOSSOMS, a resource library where educators can find teaching materials, to create and evaluate compelling project-based learning (PBL) lesson plans for secondary-school STEM teachers and students. Working with MIT students and selected educational partners, the team will utilize existing BLOSSOMS lessons well-suited for PBL follow-up.</p> <p>"The Compassionate Systems Framework and Network Development" — Peter Senge and Mette Miriam Boell, J-WEL. 
In 2016, Peter Senge and Mette Miriam Boell began working with the International Baccalaureate (IB) network to develop and prototype a “Compassionate Systems Framework,” connecting systems thinking with mindfulness practices and social-emotional learning across the pK-12 spectrum. In this project, their team will assess impact and identify best practices that can be extended beyond the IB.</p> <p>"XRoads: Building Educator Capacity in XR" — Pattie Maes, professor of media arts and sciences, and Eric Klopfer, professor of comparative media studies/writing. Virtual reality (VR), augmented reality (AR), and mixed reality (MR) — collectively known as “XR” — have great potential as educational tools, but few attempts have been made to integrate educators into the design and delivery of relevant experiences. Building upon their work in the Education Arcade and the MIT Media Lab's Fluid Interfaces group, research scientists Meredith Thompson and Scott W. Greenwald will work closely with teachers to adapt their work in room-scale VR for K-12 STEAM contexts and pilot the experiences with teachers and students.</p> <p>"Modular Curriculum With Hands-On, Low-Cost Biology Educational Activities" — Jim Collins, the Termeer Professor of Medical Engineering and Science and professor of biological engineering. Collins and his team, led by biological engineering PhD student Ally Huang, previously developed a hands-on, low-cost synthetic biology educational kit based on freeze-dried cell-free reactions, which demonstrate biological concepts in an engaging manner.
This project will develop a database of modular lessons using these activities, allowing educators to create their own curriculum suited for their students’ needs.</p> <p><strong>Grant recipients, higher education projects</strong></p> <p>The four higher education recipients for the spring 2018 grant round are:</p> <p>"Technology Design for Coffee Production in Colombia: A Co-Design Experience" — Dan Frey, professor of mechanical engineering. Frey, PhD student Pedro Reynolds-Cuéllar, and their team will develop an Independent Activities Period course for both graduate and undergraduate students from all five MIT schools. Students in the course will co-create or re-design, along with Colombian coffee farmers, technologies for different stages of the coffee production process in Colombia in the context of climate change adaptation.</p> <p>"Advancing Socially-Directed STEM Education" — Christine Ortiz, the Morris Cohen Professor of Materials Science and Engineering. This project will focus on the development of course materials to be implemented in the MIT fall term class 3.087 (Materials, Societal Impact, and Social Innovation). The materials will not only be applied within MIT, but also to historically marginalized and underserved students nationwide and globally through a new nonprofit educational organization, Station1, founded by Ortiz.</p> <p>"'Social IT Solutions' Workshop in Tanzania" — Lisa Parks, professor of comparative media studies. The “Social IT Solutions” workshop will equip computer science students at the Dar es Salaam Institute of Technology (DIT) and the State University of Zanzibar (SUZA) with interdisciplinary knowledge and skills in the areas of information communication technologies for development, digital media, and design learning.
The MIT team, which will include four students, will work alongside faculty from DIT and SUZA to facilitate a two-week workshop for Tanzanian computer science students.</p> <p>"Skicinuwi-npisun: A Community-Centered Project for Documentation and Teaching of the Passamaquoddy Language" — Norvin Richards, professor of linguistics. This project supports the work of linguistics graduate student Newell Lewey, from the Passamaquoddy nation of northern Maine, on language teaching and curriculum development to help preserve the severely endangered Passamaquoddy language. It also provides funds for MIT linguists to work with the remaining speakers of the language, both to help with the creation of pedagogical materials and to further understanding of the grammar of the language. Copies of all records and materials will be provided to the Passamaquoddy tribe, as well as being archived at MIT.</p> Colombian farmers demonstrate the coffee-growing process to D-Lab students. Image courtesy of MIT D-Lab Cognitive scientists define critical period for learning language Study shows children remain adept learners until the age of 17 or 18. Tue, 01 May 2018 04:59:59 -0400 Anne Trafton | MIT News Office <p>A great deal of evidence suggests that it is more difficult to learn a new language as an adult than as a child, which has led scientists to propose that there is a “critical period” for language learning.
However, the length of this period and its underlying causes remain unknown.</p> <p>A new study performed at MIT suggests that children remain very skilled at learning the grammar of a new language much longer than expected — up to the age of 17 or 18. However, the study also found that it is nearly impossible for people to achieve proficiency similar to that of a native speaker unless they start learning a language by the age of 10.</p> <p>“If you want to have native-like knowledge of English grammar you should start by about 10 years old. We don’t see very much difference between people who start at birth and people who start at 10, but we start seeing a decline after that,” says Joshua Hartshorne, an assistant professor of psychology at Boston College, who conducted this study as a postdoc at MIT.</p> <p>People who start learning a language between 10 and 18 will still learn quickly, but since they have a shorter window before their learning ability declines, they do not achieve the proficiency of native speakers, the researchers found. The findings are based on an analysis of a grammar quiz taken by nearly 670,000 people, which is by far the largest dataset that anyone has assembled for a study of language-learning ability.</p> <p>“It’s been very difficult until now to get all the data you would need to answer this question of how long the critical period lasts,” says Josh Tenenbaum, an MIT professor of brain and cognitive sciences and an author of the paper. 
“This is one of those rare opportunities in science where we could work on a question that is very old, that many smart people have thought about and written about, and take a new perspective and see something that maybe other people haven’t.”</p> <p>Steven Pinker, a professor of psychology at Harvard University, is also an author of the paper, which appears in the journal <em>Cognition</em> on May 1.</p> <p><strong>Quick learners</strong></p> <p>While it’s typical for children to pick up languages more easily than adults — a phenomenon often seen in families that immigrate to a new country — this trend has been difficult to study in a laboratory setting. Researchers who brought adults and children into a lab, taught them some new elements of language, and then tested them, found that adults were actually better at learning under those conditions. Such studies likely do not accurately replicate the process of long-term learning, Hartshorne says.</p> <p>“Whatever it is that results in what we see in day-to-day life with adults having difficulty in fully acquiring the language, it happens over a really long timescale,” he says.</p> <p>Following people as they learn a language over many years is difficult and time-consuming, so the researchers came up with a different approach. They decided to take snapshots of hundreds of thousands of people who were in different stages of learning English. By measuring the grammatical ability of many people of different ages, who started learning English at different points in their life, they could get enough data to come to some meaningful conclusions.</p> <p>Hartshorne’s original estimate was that they needed at least half a million participants — unprecedented for this type of study. 
Faced with the challenge of attracting so many test subjects, he set out to create a grammar quiz that would be entertaining enough to go viral.</p> <p>With the help of some MIT undergraduates, Hartshorne scoured scientific papers on language learning to discover the grammatical rules most likely to trip up a non-native speaker. He wrote questions that would reveal these errors, such as asking whether a sentence like “Yesterday John wanted to won the race” is grammatically correct.</p> <p>To entice more people to take the test, he also included questions that were not necessary for measuring language learning, but were designed to reveal which dialect of English the test-taker speaks. For example, an English speaker from Canada might find the sentence “I’m done dinner” correct, while most others would not.</p> <p>Within hours after being posted on Facebook, the 10-minute quiz “<a href="">Which English?</a>” had gone viral.</p> <p>“The next few weeks were spent keeping the website running, because the amount of traffic we were getting was just overwhelming,” Hartshorne says. “That’s how I knew the experiment was sufficiently fun.”</p> <p><strong>A long critical period</strong></p> <p>After taking the quiz, users were asked to reveal their current age and the age at which they began learning English, as well as other information about their language background.
The researchers ended up with complete data for 669,498 people, and once they had this huge amount of data, they had to figure out how to analyze it.</p> <p>“We had to tease apart how many years has someone been studying this language, when they started speaking it, and what kind of exposure have they been getting: Were they learning in a class or were they immigrants to an English-speaking country?” Hartshorne says.</p> <p>The researchers developed and tested a variety of computational models to see which was most consistent with their results, and found that the best explanation for their data is that grammar-learning ability remains strong until age 17 or 18, at which point it drops. The findings suggest that the critical period for learning language is much longer than cognitive scientists had previously thought.</p> <p>“It was surprising to us,” Hartshorne says. “The debate had been over whether it declines from birth, starts declining at 5 years old, or starts declining starting at puberty.”</p> <p>The authors note that adults are still good at learning foreign languages, but they will not be able to reach the level of a native speaker if they begin learning as a teenager or as an adult.</p> <p>"Although it has long been observed that learning a second language is easier early in life, this study provides the most compelling evidence to date that there is a specific time in life after which the ability to learn the grammar of a new language declines," says Mahesh Srinivasan, an assistant professor of psychology at the University of California at Berkeley, who was not involved in the study. “This is a major step forward for the field. The study also opens surprising, new questions, because it suggests that the critical period closes much later than previously thought."</p> <p>Still unknown is what causes the critical period to end around age 18. 
The researchers suggest that cultural factors may play a role, but there may also be changes in brain plasticity that occur around that age.</p> <p>“It’s possible that there’s a biological change. It’s also possible that it’s something social or cultural,” Tenenbaum says. “There’s roughly a period of being a minor that goes up to about age 17 or 18 in many societies. After that, you leave your home, maybe you work full time, or you become a specialized university student. All of those might impact your learning rate for any language.”</p> <p>Hartshorne now plans to run some related studies in his lab at Boston College, including one that will compare native and non-native speakers of Spanish. He also plans to study whether individual aspects of grammar have different critical periods, and whether other elements of language skill such as accent have a shorter critical period.</p> <p>The researchers also hope that other scientists will make use of their data, which they have posted <a href="">online</a>, for additional studies.</p> <p>“There are lots of other things going on in this data that somebody could analyze,” Hartshorne says. “We do want to draw other scientists’ attention to the fact that the data is out there and they can use it.”</p> <p>The research was funded by the National Institutes of Health and MIT’s Center for Minds, Brains, and Machines.</p> Celebrating great mentorship for graduate students MIT’s Committed to Caring Award selects third slate of dedicated professors.
Tue, 24 Apr 2018 13:40:01 -0400 Office of Graduate Education <p>“When we talk about our experiences as graduate students at MIT, my colleagues and I tend to use words like ‘challenging,’ ‘rewarding,’ ‘inspiring,’ or ‘stressful’,” says Courtney Lesoon, the 2017-2018 Graduate Community Fellow for the Committed to Caring Program and a PhD student in the History, Theory and Criticism Section of the Department of Architecture. “Usually our discussions center around our research interests, new findings in our field, or upcoming deadlines.”</p> <p>The conversation about challenges and stresses at MIT, though, is arguably shifting. A number of new programs have been initiated across campus that prioritize emotional and mental health not just as supplementary to the lives of students, but as integral to them. Such programs include MindHandHeart, the campus coalition to support community wellness; work of the Institute Community and Equity Office (ICEO); Active Minds, the student-led initiative for better health and wellness; and Committed to Caring (C2C), which honors caring faculty on campus.</p> <p>In recent years, a growing <a href="" target="_blank">body of research</a> has highlighted the importance of advising and mentorship to graduate students’ academic experience and well-being. The Committed to Caring program recognizes that in graduate school, advisors and mentors set the tone for student experiences, and positive faculty support has the ability to shape student research and lives for the better. C2C honors professors who build inclusive cultures in their labs and classrooms, who support their students’ mental and emotional health, and who actively support their students’ scholarly pursuits. 
Selected faculty members are showcased via a broad campus poster campaign, individual profiles housed on the Office of Graduate Education website, and <em>MIT News</em> articles.</p> <p><strong>A celebration of caring</strong></p> <p>On April 11, a celebration was held to honor all past Committed to Caring awardees, as well as the 28 new awardees listed below. Profiles for the first two slates of C2C awardees may be found on the Committed to Caring website.</p> <p>The event, held in the Samberg Conference Center, was hosted by Vice Chancellor Ian Waitz and included remarks by Provost Martin Schmidt and Senior Associate Dean for Graduate Education Blanche Staton. Formal recognition of these new awardees will be ongoing throughout the 2018-2019 academic year, as pairs of posters and profiles are released each month.</p> <p>The following faculty members are the 2017-2018 recipients of the Committed to Caring Award:</p> <p>Emilio Baglietto, Department of Nuclear Science and Engineering</p> <p>Cullen Buie, Department of Mechanical Engineering</p> <p>Paola Cappellaro, Department of Nuclear Science and Engineering</p> <p>Gabriella Carolini, Department of Urban Studies and Planning</p> <p>Anna Frebel, Department of Physics</p> <p>Paula Hammond, Department of Chemical Engineering</p> <p>Wesley Harris, Department of Aeronautics and Astronautics</p> <p>Erin Kelly, Sloan School of Management</p> <p>Tom Kochan, Sloan School of Management</p> <p>Ju Li, Department of Materials Science and Engineering</p> <p>John Lienhard, Department of Mechanical Engineering</p> <p>Eytan Modiano, Department of Aeronautics and Astronautics</p> <p>Susan Murcott, Department of Urban Studies and Planning</p> <p>Bradley Olsen, Department of Chemical Engineering</p> <p>Agustin Rayo, Department of Linguistics and Philosophy</p> <p>Rebecca Saxe, Department of Brain and Cognitive Sciences</p> <p>Warren Seering, Department of Mechanical Engineering</p> <p>Julie Shah, Department of Aeronautics and
Astronautics</p> <p>Matthew Shoulders, Department of Chemistry</p> <p>Hadley Sikes, Department of Chemical Engineering</p> <p>Justin Steil, Department of Urban Studies and Planning</p> <p>David Trumper, Department of Mechanical Engineering</p> <p>Lily Tsai, Department of Political Science</p> <p>Harry Tuller, Department of Materials Science and Engineering</p> <p>Evelyn Wang, Department of Mechanical Engineering</p> <p>Kamal Youcef-Toumi, Department of Mechanical Engineering</p> <p>Jinhua Zhao, Department of Urban Studies and Planning</p> <p>Ezra Zuckerman, Sloan School of Management</p> <p><strong>Student centered, student driven</strong></p> <p>Graduate students from across MIT’s campus are invited by the Office of Graduate Education (OGE) to nominate professors whom they believe to be outstanding mentors for the Committed to Caring Award. The nominations are then reviewed by a selection committee composed primarily of graduate students, with additional representation by staff and faculty in the form of a prior recipient.</p> <p>Selection criteria for C2C include the scope and reach of advisor impact on the experience of graduate students, excellence in scholarship, and demonstrated commitment to diversity and inclusion. By recognizing the human element of graduate education, C2C aims to encourage good advising and mentorship across MIT’s campus. The C2C Program was conceived in 2014 by Monica Orta, then-OGE assistant director for diversity initiatives, and implemented by Orta and OGE communications officer Heather Konar.</p> <p>The work is driven each year by one graduate student who serves as the C2C Graduate Community Fellow and works closely with Konar.
This year’s selection committee included Assistant Dean for Graduate Education Suraiya Baluch (chair), Professor Amy Glasmeier (previous C2C honoree), and graduate students Courtney Lesoon (2017-18 C2C Graduate Community Fellow), Claire Duvallet, Danielle Olson, and Jennifer Cherone (2016-17 C2C Graduate Community Fellow).</p> <p><strong>A process of affirmation</strong></p> <p>The C2C Program contributes to OGE’s mission of making graduate education at MIT “empowering, exciting, holistic, and transformative.” The opening of nominations in 2014 received a strong response, and the number and richness of nominations in subsequent rounds have only grown.</p> <p>Baluch remarked of the most recent selection round, “It was heartwarming to read the numerous accounts regarding acts of compassion, kindness and generosity of spirit in our community. It speaks to the power and impact acts of caring have that so many students felt compelled to participate in the nominating process. These acts were often simple, everyday actions such as regularly inquiring about someone's wellbeing or sharing a meal as well as responding with humanity to life's struggles.”</p> <p>In 2017, the OGE received 114 nominations for 72 faculty members across campus. Committee members expressed being deeply moved by the thoughtful, sincere, and touching nominations that were submitted. Blanche Staton, senior associate dean for graduate education, says, “I am grateful to our students for recognizing the caring and positive spirit and the contributions of our faculty, and I join them in applauding the professors who, by their example, show us all what it truly means to ‘advance a caring and respectful community’.”</p> <p><strong>Guideposts for strong mentoring</strong></p> <p>As the committee reviewed this past year’s nominations, a number of striking themes emerged.
Supported by numerous personal quotes, fellow Courtney Lesoon and the C2C team developed a list of “Mentoring Guideposts” that reflect acts of mentorship that seem to be the most meaningful and formative.</p> <p>MIT graduate students were moved to nominate mentors who:</p> <ul> <li>actively show empathy for students’ personal experiences;</li> <li>advocate for students both academically and personally;</li> <li>validate students by demonstrating interest in their research and ideas;</li> <li>encourage and support students in developing a healthy work/life balance;</li> <li>have courageous conversations about issues that impact students outside of MIT, such as political developments, personal loss, or housing needs;</li> <li>initiate contact with students, check in consistently, and provide extra support as needed;</li> <li>provide a channel for students to express their difficulties, including the means to do so anonymously;</li> <li>foster a friendly and inclusive work environment;</li> <li>emphasize learning, development, and practice over achievement and goals; and</li> <li>advise informally, teaching students about the system of academia, the importance of networking, and professional development skills.</li> </ul> <p>The C2C team is exploring ideas to disseminate the guideposts widely across campus.</p> Honorees Ahmed Ghoniem (left) and Wesley Harris enjoy the Committed to Caring celebration on April 11. Photo: Joseph Lee Institute Professor Emeritus
Morris Halle, innovative and influential linguist, dies at 94 Scholar conducted groundbreaking research, helped found MIT’s linguistics program, and inspired generations of students. Tue, 03 Apr 2018 14:30:00 -0400 Peter Dizikes | MIT News Office <p>Institute Professor Emeritus Morris Halle, one of the most accomplished and influential scholars in the field of linguistics, died of natural causes on Monday at age 94.</p> <p>Halle was an expert in phonology, the structure of sounds in language. His wide-ranging work helped establish his own field as an important domain of research and helped systematize inquiry into the subject. Halle’s work was part of a revolution in linguistics that helped scholars understand human language as a phenomenon with a deep and universal structure, which stemmed from distinctive human faculties.</p> <p>Beyond his own research, Halle helped found MIT’s renowned linguistics program and helped imbue it with its intellectual ethos, by encouraging meticulous research, a fruitful combination of empirical work and theory, and a spirit of collaborative, open-ended inquiry, which Halle exemplified throughout his own academic life.</p> <p>Halle and Institute Professor Emeritus Noam Chomsky developed groundbreaking research together in the 1950s and 1960s — after Halle played a key role in bringing Chomsky to MIT in 1955. 
Together, Chomsky and Halle worked to specify the innate foundations dictating the structure of human language, extending Chomsky’s work on syntax into a rule-governed framework describing the sounds produced in English.</p> <p>“Morris and I were very close for almost 70 years, working together, sharing much else,” Chomsky told <em>MIT News</em> in response to Halle’s death.&nbsp;“His contributions to modern linguistic science are incalculable, not least right here at MIT, where even apart from his groundbreaking work, he was primarily responsible for creating what quickly became, and has remained, the center of a research enterprise that has flourished all over the world, far beyond anything in the millennia of inquiry into language.&nbsp;[Halle was] a wise and compassionate person, more than anyone I’ve known, whose kindness, warmth, and care touched many lives.”</p> <p>David Pesetsky, the head of the MIT Department of Linguistics and Philosophy, said Halle was a profoundly influential researcher and educator — and a touchstone for MIT linguists over his whole career.</p> <p>“Morris was an epoch-defining figure in the history of modern linguistics — not only for his scientific contributions, which helped launch the modern era of our field, but also for his revolutionary approach to graduate education,” said Pesetsky, the Ferrari P. Ward Professor of Modern Languages and Linguistics.</p> <p>Pesetsky added: “Here at MIT, Morris created a linguistics program unlike any other at the time, which involved students in the process of discovery from the very beginning, took their ideas and findings seriously, and demanded no less from the students themselves. This vision, a very MIT vision, is now everywhere in linguistics, but it was Morris's vision. But Morris was more than just the visionary father of our program and a model scientist. 
He was also the soul of our program, whom we loved and turned to for direction and advice as students and for years thereafter.”</p> <p>Donca Steriade, the Class of 1941 Professor in the Department of Linguistics and Philosophy at MIT, and a former PhD student of Halle, praised his commitment to teaching and the confidence he instilled in his students.</p> <p>“What his former students remember more than Morris’ advice and ideas — wise and abundant as these were — is that he treated us as individually responsible for the future of our field,” Steriade said. “It really concentrates the mind to understand this, within weeks of landing at MIT. He saw his job as that of equipping us with a bit of background knowledge and making us appreciate some of the unsolved problems. The hope was that we would decide to solve them, and acquire an education in the process.”</p> <p><strong>Creating a department, making a revolution</strong></p> <p>Morris Halle was born into a Jewish family in Latvia in 1923. His father, a businessman, moved his branch of the family to the United States in 1940, after Germany invaded Poland. However, other relatives of Halle did not emigrate, and only a small portion of Latvia’s Jewish population survived the Nazi occupation.</p> <p>Halle first studied at the City College of New York, before entering the U.S. military and serving in France during World War II. Halle later downplayed his military efforts — “I just did what they told me,” <a href="">he said to <em>MIT News</em></a> in 2010 — and returned to the U.S. to resume his education. After receiving a master’s degree in linguistics from the University of Chicago and then studying at Columbia University, Halle received his PhD in linguistics from Harvard under the eminent scholar Roman Jakobson.</p> <p>Among many other skills, Halle was a polyglot whose unusual fluency in languages extended to English, German, French, Russian, Latvian, Yiddish, and Hebrew.
That facility helped Halle earn a job at MIT in 1951 teaching German and Russian, as the Institute did not have a formal department devoted to research and teaching in linguistics.<br /> <br /> But Halle had an expansive vision for linguistics at MIT. He saw the possibilities for a full-fledged research department, one that could pursue work on innovative ideas. That vision started becoming a reality after Halle, who was conducting acoustic research on Russian at MIT’s Research Laboratory of Electronics (RLE), hired the scholar Carol Chomsky, who would later become a linguist at Harvard. As a result, Halle got to know her husband, Noam Chomsky.</p> <p>In their first conversation, as Noam Chomsky recounted to <em>MIT News</em> in 2012, Halle and Chomsky “immediately had a big argument about something, and later I thought he had some good points.” Chomsky added: “Anyway, we very quickly became close friends.” Halle, Chomsky, and biologist Eric Lenneberg soon began an extended dialogue about their novel approach to linguistic explanation, and ultimately Chomsky joined the MIT faculty.</p> <p>“In the summer of 1955, Chomsky, with whom I was friends, needed a job,” Halle told <em>MIT News</em>. “So I went to the department head and I said, ‘Why don’t we hire Chomsky?’”</p> <p>It was a sound recommendation. Chomsky helped change linguistics with his concept of universal grammar, which holds that language is not simply acquired through social learning, but relies on an innate faculty — which in turn implies that all languages have common structures.
Chomsky worked primarily on syntax, the principles underlying the organization of language, but Chomsky and Halle collaborated extensively to extend these concepts to phonology.</p> <p>One result of the Chomsky-Halle partnership was their seminal 1968 book, “The Sound Pattern of English” — often simply called “SPE” — which systematically set out the rules that convert strings of abstract minimal units (known as phonemes) into uttered sounds in English. According to this hypothesis, the path from mind to speech runs through a set of ordered rules, each one transforming its input in a specified way.</p> <p>In the half-century since SPE appeared, alternative hypotheses have emerged to account for the formation of sounds in English. But the book has had immense influence, pushing the field forward by developing such a broad explanatory framework.</p> <p>Halle’s work extended well beyond the general hypothesis presented in SPE. Among other things, Halle also conducted far-ranging research into the sound patterns of Russian. And in the 1990s, Halle also spent years developing another broad theoretical framework connecting syntax and sound, termed “distributed morphology,” which he created along with linguist Alec Marantz, among other contributors. This refinement outlined a set of operations through which morphemes, the smallest meaningful units of language, are mobilized in the process of speech.</p> <p><strong>“Argue with me!” </strong></p> <p>Among Halle’s many characteristics as a scholar, one of the most enduring was his absolute belief that intellectual research and debate should occur on the basis of substantive insight, without regard to the formal academic hierarchy. Generations of students heard Halle utter a familiar phrase — “Argue with me!” — that served as an invitation for reasoned, empirical discussion.</p> <p>That form of mentorship was powerful, and it sought to produce intellectual progress rather than rigid disciples.
In the 2016 edition of the <em>Annual Review of Linguistics</em>, Mark Liberman, a professor of linguistics at the University of Pennsylvania, wrote that Halle, apart from his vast output of research, “has had an equally profound influence through his role as a teacher and mentor, and this personal influence has not been limited to students who follow closely in his intellectual and methodological footsteps. It has been just as strong — or stronger — among researchers who disagree with his specific ideas and even his general approach, or who work in entirely different subfields.”</p> <p>In a 1974 lecture before the members of the Linguistic Society of America, Halle characteristically declared that “the linguist must be prepared to lose as well as to win.” Asked about that declaration in 2010, Halle instantly quipped, “I was younger then.” However, he quickly added, “If you believe something, put your money where your mouth is, and 10 years later you will know if you were right or wrong. Nobody else will need to tell you when you’re whipped.”</p> <p>Halle’s graduate students, by many accounts (including a large number compiled by Liberman in that same article), often found him daunting and exacting at first, but came to realize how invested Halle was in them, and how supportive he was as their work progressed.</p> <p>“Soon after Morris founded MIT’s linguistics department,” Steriade said, “significant contributions to this research program were made by Morris’ and Noam’s students and colleagues, and then by their own hundreds of students and students’ students.
Most phonologists active today trace their academic lineage, directly or indirectly, to Morris.”</p> <p>Just last week, Steriade said, Halle told two former students, “We did some good teaching.”</p> <p>Samuel Jay Keyser, an emeritus professor of linguistics who knew Halle for decades, on Monday paid tribute to Halle’s influential collaboration with Chomsky and his lifetime of good works on behalf of others.</p> <p>“It is impossible to think of Morris Halle without thinking of Noam Chomsky,” Keyser observed. “These two individuals were the pillars upon which theoretical linguistics in the 20th century and beyond was built. They are responsible not only for the best linguistics department in the world, but for sending hundreds of students out into the world to try to do the same.”</p> <p>Keyser recalled telling students that “Morris should be their role model. He is not superhuman, but he is as good as a human being can be. … He was my mentor, my role model, and my friend for over 50 years. Now, sad to say, one of the pillars has crumbled. It had to come. But that doesn’t make it any easier.”</p> <p>For his part, Pesetsky summarized the centrality of Halle to MIT’s linguistics program, and the degree of respect his colleagues held for him, by reflecting on a large <a href="">50th anniversary celebration</a> the linguistics department held in 2011, with multiple days of talks and events.</p> <p>“When our program celebrated its 50th year in 2011, we had Morris's friend and lifelong co-conspirator Noam Chomsky as the final speaker on the second-to-last day of our celebration, but the final slot on the final day belonged to Morris,” Pesetsky said. “None of the several hundred alumni and colleagues in the room needed to be told why, and no other ending was ever discussed.”</p> <p>Halle was married for 56 years to his wife Rosamond, who died in 2011. Halle is survived by his sons, David, John, and Tim, and his grandchildren, Casey, Ben, Cecilia, and Rosie. 
An MIT service honoring Halle <a href="">is planned</a> for Saturday, May 5.</p> Morris Halle, Institute Professor Emeritus of LinguisticsImage: Kai von Fintel Linguistics, School of Humanities Arts and Social Sciences, Obituaries, Faculty MIT and Harvard join forces to address early childhood literacy Reach Every Reader aims to end early literacy crisis. Tue, 06 Mar 2018 12:00:00 -0500 MIT Open Learning <p>Today, MIT’s Integrated Learning Initiative (MITili), Harvard Graduate School of Education (HGSE), and Florida State University (FSU) <a href="">announced a collaboration</a> to ensure that every child learns to read well enough by the end of third grade to support more effective learning later in their education. This will be achieved through research on how personalized learning and intervention improve early childhood literacy.</p> <p>Research shows that students who fail to read adequately in first grade have a 90 percent probability of reading poorly in fourth grade, and a 75 percent probability of reading poorly in high school. This underscores the need to level the playing field in literacy early in a child’s education.</p> <p>This new collaboration, called Reach Every Reader, brings MITili, HGSE, and Florida State University researchers together to work on rigorous scientific approaches to personalized learning for literacy, to develop diagnostic tools and interventions to help young children at risk for literacy challenges <em>before</em> they fail, and to build capacity among educators, caregivers, and policy makers to advance ongoing conversations and instructional strategies around personalized learning.&nbsp; The initiative is supported by a $30 million grant from Priscilla Chan and Mark Zuckerberg, co-founders of the Chan Zuckerberg Initiative.</p> <p>"For a young child, struggling to read can be a crushing blow with lifelong consequences. Multiply that experience by millions of children, and it's a crisis for our society," says MIT President L. Rafael Reif. 
"At MIT, we approach the problem as scientists and engineers: by seeking to understand the brain science of how learning happens, and by building innovative technologies and solutions to help. We are delighted to be able to contribute in these ways to the exciting collaboration behind Reach Every Reader."</p> <p>“We are excited to support the launch of Reach Every Reader, a unique combination of cutting-edge education and neuroscience research to better understand how we can help every kid stay on track to reading on grade level by the end of third grade. I know from my work at The Primary School how important it is to identify learning barriers students face early and provide them with the right supports to succeed,” says Chan, who is also the founder and CEO of The Primary School. "This new program represents the type of bold, innovative thinking that we believe will help build a future for everyone and enable transformative learning experiences.”</p> <p>“This new collaboration between MITili and HGSE synergizes MIT’s strengths in science and engineering with HGSE’s expertise in the education of children. In addition, working with researchers in the Florida Center for Reading Research and College of Communication and Information at FSU will help us gain expertise in early literacy screening and assessment. We need all this knowledge to improve education, especially for children most vulnerable to falling behind,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology, a professor in brain and cognitive sciences, and director of MITili.</p> <p>Gabrieli and his collaborators are developing a web-based tool for the early identification of reading challenges to help direct children immediately toward personalized interventions. This work builds on Gabrieli’s research on the neural and cognitive development of learning in children, and the ways neuroscience can inform and advance educational outcomes. 
A key component of Reach Every Reader is to examine which interventions work for which students, building substantive research in this emerging field. The team will work with school partners to deliver these interventions to kindergarten students in summer programs and, longer term, incorporate these tools into the school curriculum.</p> <p>“Nothing is more fundamental to all aspects of education and citizenship than the power to read,” Gabrieli explains. “This collaboration is inspired by the mission of trying to have every child, regardless of circumstance, learn to read well enough by third grade so that every child can read to learn throughout the schooling and workplace years.”</p> <p>Reach Every Reader is part of MITili’s larger vision to advance multidisciplinary research on the science of learning that will inform and strengthen approaches to preK-12 education. “Science is continuously shedding more light on how we learn, and how we ought to teach,” says Sanjay Sarma, vice president for Open Learning at MIT. “MITili and HGSE are addressing early childhood literacy head-on through this collaboration.”</p> <p>The initiative is funded by Chan and Zuckerberg, who founded the Chan Zuckerberg Initiative (CZI) together in 2015. 
The philanthropic organization supports a range of educational research initiatives, focusing on four key milestones: kindergarten readiness, third-grade literacy and math, high school transitions, and postsecondary success.</p> Research shows that students who fail to read adequately in first grade have a 90 percent probability of reading poorly in fourth grade, and a 75 percent probability of reading poorly in high school.Office of Open Learning, MITili, Brain and cognitive sciences, School of Science, K-12 education, education, Education, teaching, academics, Office of Digital Learning, Learning, Language Outstanding MIT students of French explore “Paris et la rue” In the January Scholars in France program, students discover behind-the-scenes Paris and the city’s storied streets. Mon, 05 Mar 2018 15:00:01 -0500 School of Humanities, Arts, and Social Sciences <p>Think of Paris, and images materialize of sublime art and cosmopolitan sophistication. “We all romanticize the culture, and it’s fine to do that,” says Bruno Perreau, the Cynthia L. Reed Professor and associate professor of French studies. “But we also need to add different layers and rethink the connection between myth and reality,” he says.<br /> <br /> In search of this connection, Perreau brought seven students to Paris for the annual January Scholars in France program, offered during MIT’s independent activities period. Chosen through a competitive process, the January Scholars students are among the best in MIT's French studies and language program, and spoke exclusively in French during their stay in Paris. 
For their travels, the students receive airfare, lodging in a youth hostel, transportation and meals, courtesy of the French Initiatives Endowment Fund.<br /> <br /> Led by expert local guides, the group pursued a theme, “Paris et la rue” (Paris and the street), which took them beyond the usual tourist spots and into lesser-known residential and business neighborhoods.&nbsp;During walking tours, the students peeled back layers of history, learned about city planning past and present, and glimpsed behind-the-scenes views of workaday, contemporary Paris. They explored the history of street revolutions in 18th and 19th century Paris, issues of public transportation, new architectural projects, and street art.&nbsp; It was an itinerary that encouraged students “to encounter aspects of Parisian life they couldn’t have imagined,” says Perreau.<br /> &nbsp;<br /> <strong>Another side of&nbsp;Paris</strong><br /> <br /> “We got to learn about things like the design process behind the trash cans and the type of barricades built by revolting Parisians from the 17th straight through to the 20th century,” recounts Anelise Newman, a junior majoring in electrical engineering and computer science.<br /> <br /> “We saw parts of the city, like the business sector and the atelier and works of Raymond Moretti … and most importantly, we got to interact with Parisians not as tourists, but as students who were genuinely interested in learning the culture and mastering the language,” says Newman, who also wrote a <a href="">blog post</a> for MIT Admissions about the trip.<br /> <br /> Unexpected episodes enlivened and enriched their daily tours. 
In a visit to the 13th&nbsp;arrondissement, which began in the 19th&nbsp;century as a factory district and is now home to public housing and a vibrant Asian population, the group took in the many giant murals plastered on the sides of buildings.<br /> <br /> “We triggered reactions from locals, who argued with us about their favorite or most hated pieces of public art,” recalls Perreau. “Students were surprised about how attached people were to their personal visions of the city.”<br /> <br /> At La Defense, a sprawling business district dotted by high-rises with a subterranean infrastructure for highways, parking, and the Metro, the group found unexpected adventure. The city’s chief archivist and a security detail opened a series of locked doors and, descending a stairway by flashlight, revealed a hidden area.<br /> <br /> “We were taken underground to see the atelier and works of Raymond Moretti, a sculptor who passed away 13 years ago,” says Rebecca Grekin, a chemical engineering major. “We felt so privileged to be invited into a place that was normally off limits.”<br /> <br /> Perreau, who likened the concealed cavern to “a cathedral or grotto,” was astonished to find himself face to face with a gigantic sculpture nicknamed “the monster” because of the roar from nearby subway trains.<br /> <br /> “We had this feeling of being explorers,” he says. “I saw another side of Paris that had been concealed from me, even after having lived there for years.”<br /> <br /> <strong>Pulling back the curtain</strong><br /> <br /> Even at some of the more glittery Parisian venues, MIT travelers were able to pull back the curtain and gain unusual perspectives. 
During a private tour of the Palais Garnier, home to the Paris Opera Ballet, the troupe’s star dancer, Germain Louvet, showed them spaces normally inaccessible to the public: a fake ceiling behind which gentlemen from high society once chose dancers with whom to consort, and in the basement, a tank full of water intended in the 19th&nbsp;century to douse fires, but now full of koi fish.<br /> <br /> “I grew up dancing in Accra, Ghana, and was obsessed by the Paris ballet,” says Sefa Yakpo, a senior double majoring in management science and French. “So first I couldn’t believe I was standing backstage with the étoile (star), and later I was literally speechless when we went out to a café with him and learned about his life,” she says.<br /> <br /> To top off this prized experience, the group attended a performance the following night of the ballet Don Quixote at the Bastille Opera, where they witnessed a once-in-a-lifetime crowning of a female star dancer.</p> <p>Yakpo, who had worked in Paris the previous summer for a consulting firm, felt as if she was seeing the city for the first time. “I walked on very familiar streets, but peeling off layers of history and understanding the politics and culture of these places showed me how a city can have many different faces,” she says.<br /> <br /> One of Perreau’s goals was to “shift students’ perceptions of Paris and France, to build new understanding” while having fun together and enjoying the many riches the city has to offer. “There is something about pleasure at the heart of the program,” he says.<br /> <br /> Perreau may have succeeded in ways he didn’t anticipate. Grekin returned to Boston determined to continue the French experience. “I am going to keep practicing the language with a friend I made on the trip, and start going to the Boston Symphony Orchestra,” she says.</p> <p>Yakpo found the Paris sojourn a balm for the soul. 
“At MIT, where at times facts and solving problems make life seem clinical, you can forget to embrace something just because it’s beautiful,” she says. “Music, dance, art — things that touch us — are like magic, and Paris reminded me of the importance of that.”</p> <h5><em>Story prepared by SHASS Communications<br /> Editorial and Design Director: Emily Hiestand<br /> Writer: Leda Zimmerman</em></h5> In the 13th district, the students discovered "Liberté, Égalité, Fraternité," a building-mural designed by the artist Shepard Fairey, who gifted it to Paris after the attacks of 2015.Photo: Bruno PerreauSHASS, France, Global Studies and Languages, Humanities, Faculty, Classes and programs, Students, Undergraduate, Language, Independent Activities Period, School of Humanities Arts and Social Sciences MIT rates No. 1 in 12 subjects in 2018 QS World University Rankings MIT ranked within top 5 in 19 out of 48 subject areas. Wed, 28 Feb 2018 12:00:01 -0500 Stephanie Eich | Resource Development <p>MIT has been honored with 12 No. 1 subject rankings in the QS World University Rankings for 2018.</p> <p>MIT received a No. 1 ranking in the following QS subject areas: Architecture/Built Environment; Linguistics; Chemical Engineering; Civil and Structural Engineering; Computer Science and Information Systems; Electrical and Electronic Engineering; Mechanical, Aeronautical and Manufacturing Engineering; Chemistry; Materials Science; Mathematics; Physics and Astronomy; and Statistics and Operational Research. &nbsp;&nbsp;</p> <p>Additional high-ranking MIT subjects include: Art and Design (No. 4), Biological Sciences (No. 2), Earth and Marine Sciences (No. 3), Environmental Sciences (No. 3), Accounting and Finance (No. 2), Business and Management Studies (No. 4), and Economics and Econometrics (No. 2).</p> <p>Quacquarelli Symonds Limited subject rankings, published annually, are designed to help prospective students find the leading schools in their field of interest. 
Rankings cover 48 disciplines and are based on an institution’s research quality and accomplishments, academic reputation, and graduate employment.</p> <p>MIT has been ranked as the No. 1 university in the world by QS World University Rankings for six&nbsp;straight years.</p> Photo: Patrick GilloolyRankings, Computer science and technology, Linguistics, Chemical engineering, Civil and environmental engineering, Mechanical engineering, Chemistry, Materials science, Mathematics, Physics, Economics, Design, EAPS, Business and management, Accounting, Finance, DMSE, School of Engineering, School of Science, School of Architecture and Planning, Sloan School of Management, SHASS, Electrical Engineering & Computer Science (eecs), Architecture, School of Humanities Arts and Social Sciences The writing on the wall Did humans speak through cave art? New paper links ancient drawings and language’s origins. Wed, 21 Feb 2018 00:00:00 -0500 Peter Dizikes | MIT News Office <p>When and where did humans develop language? To find out, look deep inside caves, suggests an MIT professor. &nbsp;</p> <p>More precisely, some specific features of cave art may provide clues about how our symbolic, multifaceted language capabilities evolved, according to a new paper co-authored by MIT linguist Shigeru Miyagawa.</p> <p>A key to this idea is that cave art is often located in acoustic “hot spots,” where sound echoes strongly, as some scholars have observed. Those drawings are located in deeper, harder-to-access parts of caves, indicating that acoustics was a principal reason for the placement of drawings within caves. 
The drawings, in turn, may represent the sounds that early humans generated in those spots.</p> <p>In the new paper, this convergence of sound and drawing is what the authors call a “cross-modality information transfer,” a convergence of auditory information and visual art that, the authors write, “allowed early humans to enhance their ability to convey symbolic thinking.” The combination of sounds and images is one of the things that characterizes human language today, along with its symbolic aspect and its ability to generate infinite new sentences.</p> <p>“Cave art was part of the package deal in terms of how <em>Homo sapiens</em> came to have this very high-level cognitive processing,” says Miyagawa, a professor of linguistics and the Kochi-Manjiro Professor of Japanese Language and Culture at MIT. “You have this very concrete cognitive process that converts an acoustic signal into some mental representation and externalizes it as a visual.”</p> <p>Cave artists were thus not just early-day Monets, drawing impressions of the outdoors at their leisure. Rather, they may have been engaged in a process of communication.</p> <p>“I think it’s very clear that these artists were talking to one another,” Miyagawa says. “It’s a communal effort.”</p> <p>The paper, “Cross-modality information transfer: A hypothesis about the relationship among prehistoric cave paintings, symbolic thinking, and the emergence of language,” is being published in the journal <em>Frontiers in Psychology</em>. The authors are Miyagawa; Cora Lesure, a PhD student in MIT’s Department of Linguistics; and Vitor A. Nobrega, a PhD student in linguistics at the University of Sao Paulo, in Brazil. &nbsp;</p> <p><strong>Re-enactments and rituals?</strong></p> <p>The advent of language in human history is unclear. Our species is estimated to be about 200,000 years old. 
Human language is often considered to be at least 100,000 years old.</p> <p>“It’s very difficult to try to understand how human language itself appeared in evolution,” Miyagawa says, noting that “we don’t know 99.9999 percent of what was going on back then.” However, he adds, “There’s this idea that language doesn’t fossilize, and it’s true, but maybe in these artifacts [cave drawings], we can see some of the beginnings of <em>Homo sapiens</em> as symbolic beings.”</p> <p>While the world’s best-known cave art exists in France and Spain, examples of it abound throughout the world. One form of cave art suggestive of symbolic thinking — geometric engravings on pieces of ochre, from the Blombos Cave in southern Africa — has been estimated to be at least 70,000 years old. Such symbolic art indicates a cognitive capacity that humans took with them to the rest of the world.</p> <p>“Cave art is everywhere,” Miyagawa says. “Every major continent inhabited by <em>Homo sapiens</em> has cave art. … You find it in Europe, in the Middle East, in Asia, everywhere, just like human language.” In recent years, for instance, scholars have catalogued Indonesian cave art they believe to be roughly 40,000 years old, older than the best-known examples of European cave art.</p> <p>But what exactly was going on in caves where people made noise and rendered things on walls? Some scholars have suggested that acoustic “hot spots” in caves were used to make noises that replicate hoofbeats, for instance; some 90 percent of cave drawings involve hoofed animals. 
These drawings could represent stories or the accumulation of knowledge, or they could have been part of rituals.</p> <p>In any of these scenarios, Miyagawa suggests, cave art displays properties of language in that “you have action, objects, and modification.” This parallels some of the universal features of human language — verbs, nouns, and adjectives — and Miyagawa suggests that “acoustically based cave art must have had a hand in forming our cognitive symbolic mind.”</p> <p><strong>Future research: More decoding needed</strong></p> <p>To be sure, the ideas proposed by Miyagawa, Lesure, and Nobrega merely outline a working hypothesis, which is intended to spur additional thinking about language’s origins and point toward new research questions.</p> <p>Regarding the cave art itself, that could mean further scrutiny of the syntax of the visual representations, as it were. “We’ve got to look at the content” more thoroughly, says Miyagawa. In his view, as a linguist who has looked at images of the famous Lascaux cave art from France, “you see a lot of language in it.” But it remains an open question how much a re-interpretation of cave art images would yield in linguistic terms.</p> <p>The long-term timeline of cave art is also subject to re-evaluation on the basis of any future discoveries. 
If cave art is implicated in the development of human language, finding and properly dating the oldest known such drawings would help us place the origins of language in human history — which may have happened fairly early on in our development.</p> <p>“What we need is for someone to go and find in Africa cave art that is 120,000 years old,” Miyagawa quips.</p> <p>At a minimum, a further consideration of cave art as part of our cognitive development may reduce our tendency to regard art in terms of our own experience, in which it probably plays a more strictly decorative role for most people.&nbsp;</p> <p>“If this is on the right track, it’s quite possible that … cross-modality transfer helped develop a symbolic mind,” Miyagawa says. In that case, he adds, “art is not just something that is marginal to our culture, but central to the formation of our cognitive abilities.”</p> While the world’s best-known cave art exists in France and Spain, examples of it abound throughout the world. Image: stock image of a cave painting in South AfricaSHASS, Social sciences, Linguistics, Research, Art, School of Humanities Arts and Social Sciences Back-and-forth exchanges boost children’s brain response to language Study finds engaging young children in conversation is more important for brain development than “dumping words” on them. Tue, 13 Feb 2018 23:59:59 -0500 Anne Trafton | MIT News Office <p>A landmark 1995 study found that children from higher-income families hear about 30 million more words during their first three years of life than children from lower-income families. This “30-million-word gap” correlates with significant differences in tests of vocabulary, language development, and reading comprehension.</p> <p>MIT cognitive scientists have now found that conversation between an adult and a child appears to change the child’s brain, and that this back-and-forth conversation is actually more critical to language development than the word gap. 
In a study of children between the ages of 4 and 6, they found that differences in the number of “conversational turns” accounted for a large portion of the differences in brain physiology and language skills that they found among the children. This finding applied to children regardless of parental income or education.</p> <p>The findings suggest that parents can have considerable influence over their children’s language and brain development by simply engaging them in conversation, the researchers say.</p> <p>“The important thing is not just to talk to your child, but to talk with your child. It’s not just about dumping language into your child’s brain, but to actually carry on a conversation with them,” says Rachel Romeo, a graduate student at Harvard and MIT and the lead author of the paper, which appears in the Feb. 14 online edition of <em>Psychological Science</em>.</p> <div class="cms-placeholder-content-video"></div> <p>Using functional magnetic resonance imaging (fMRI), the researchers identified differences in the brain’s response to language that correlated with the number of conversational turns. In children who experienced more conversation, Broca’s area, a part of the brain involved in speech production and language processing, was much more active while they listened to stories. This brain activation then predicted children’s scores on language assessments, fully explaining the income-related differences in children’s language skills.&nbsp;</p> <p>“The really novel thing about our paper is that it provides the first evidence that family conversation at home is associated with brain development in children. It’s almost magical how parental conversation appears to influence the biological growth of the brain,” says John Gabrieli, the Grover M. 
Hermann Professor in Health Sciences and Technology, a professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.</p> <p><strong>Beyond the word gap</strong></p> <p>Before this study, little was known about how the “word gap” might translate into differences in the brain. The MIT team set out to find these differences by comparing the brain scans of children from different socioeconomic backgrounds.</p> <p>As part of the study, the researchers used a system called Language Environment Analysis (LENA) to record every word spoken or heard by each child. Parents who agreed to have their children participate in the study were told to have their children wear the recorder for two days, from the time they woke up until they went to bed.</p> <p>The recordings were then analyzed by a computer program that yielded three measurements: the number of words spoken by the child, the number of words spoken to the child, and the number of times that the child and an adult took a “conversational turn” — a back-and-forth exchange initiated by either one.</p> <p>The researchers found that the number of conversational turns correlated strongly with the children’s scores on standardized tests of language skill, including vocabulary, grammar, and verbal reasoning. The number of conversational turns also correlated with more activity in Broca’s area, when the children listened to stories while inside an fMRI scanner.</p> <p>These correlations were much stronger than those between the number of words heard and language scores, and between the number of words heard and activity in Broca’s area.</p> <p>This result aligns with other recent findings, Romeo says, “but there’s still a popular notion that there’s this 30-million-word gap, and we need to dump words into these kids — just talk to them all day long, or maybe sit them in front of a TV that will talk to them. 
However, the brain data show that it really seems to be this interactive dialogue that is more strongly related to neural processing.”</p> <p>The researchers believe interactive conversation gives children more of an opportunity to practice their communication skills, including the ability to understand what another person is trying to say and to respond in an appropriate way.</p> <p>While children from higher-income families were exposed to more language on average, children from lower-income families who experienced a high number of conversational turns had language skills and Broca’s area brain activity similar to those of children who came from higher-income families.</p> <p>“In our analysis, the conversational turn-taking seems like the thing that makes a difference, regardless of socioeconomic status. Such turn-taking occurs more often in families from a higher socioeconomic status, but children coming from families with lesser income or parental education showed the same benefits from conversational turn-taking,” Gabrieli says.</p> <p><strong>Taking action</strong></p> <p>The researchers hope their findings will encourage parents to engage their young children in more conversation. Although this study was done in children age 4 to 6, this type of turn-taking can also be done with much younger children, by making sounds back and forth or making faces, the researchers say.</p> <p>“One of the things we’re excited about is that it feels like a relatively actionable thing because it’s specific. That doesn’t mean it’s easy for less educated families, under greater economic stress, to have more conversation with their child. But at the same time, it’s a targeted, specific action, and there may be ways to promote or encourage that,” Gabrieli says. 
&nbsp;</p> <p>Roberta Golinkoff, a professor of education at the University of Delaware School of Education, says the new study presents an important finding that adds to the evidence that it’s not just the number of words children hear that is significant for their language development.</p> <p>“You can talk to a child until you’re blue in the face, but if you’re not engaging with the child and having a conversational duet about what the child is interested in, you’re not going to give the child the language processing skills that they need,” says Golinkoff, who was not involved in the study. “If you can get the child to participate, not just listen, that will allow the child to have a better language outcome.”</p> <p>The MIT researchers now hope to study the effects of possible interventions that incorporate more conversation into young children’s lives. These could include technological assistance, such as computer programs that can converse or electronic reminders to parents to engage their children in conversation.</p> <p>The research was funded by the Walton Family Foundation, the National Institute of Child Health and Human Development, a Harvard Mind Brain Behavior Grant, and a gift from David Pun Chan.</p> MIT cognitive scientists have found that conversation between an adult and a child appears to change the child’s brain. Research, Language, Learning, Brain and cognitive sciences, McGovern Institute, School of Science, Health sciences and technology, National Institutes of Health (NIH) Applying philosophy for a better democracy In Justin Khoo’s new class, students explore how language affects censorship, dissent, truth, and propaganda. Fri, 09 Feb 2018 09:35:01 -0500 School of Humanities, Arts, and Social Sciences <p>Why would a politician publicly contradict himself? What does it mean to call someone a racist? Can you control hate speech without eroding free speech? 
What happens to democracy if truth is subjective?<br /> <br /> These are among the wide range of topics discussed in 24.192 (Language, Information, and Power), an undergraduate philosophy seminar taught by Assistant Professor Justin Khoo. Offered for just the second time last fall, the class explores philosophical and political issues surrounding discourse, with a focus on connecting philosophical ideas to current controversies about speech.<br /> <br /> “The class has been a great space for discussion about very relevant issues we face today. In 24.192 we regularly engaged on topics of bigotry, ideal political discourse, and the ever-growing polarization of partisanship and media sources,” says Joseph Edwards, a sophomore in electrical engineering and computer science. “This class has put philosophy and linguistics on the front line in terms of how we look at our cultures and societies.”<br /> <br /> “We’re not just playing with ideas, but ideally articulating viewpoints in a way that can change hearts and minds,” says Khoo. “It’s a lofty goal.”</p> <p><strong>Philosophy of language as a framework for difficult questions</strong></p> <p>Readings range from classics of philosophy such as John Stuart Mill’s “On Liberty” to works by contemporary philosophers, such as University of Connecticut professor Michael Lynch, as well as a wide selection of opinion articles on such topics as censorship, racism, and the nature of truth in politics.<br /> <br /> “We have to distinguish between what is objectionable within a certain context and the idea that a view should never be expressed ever,” says Khoo, noting that the philosophy of language provides a framework for considering such difficult questions. “When you call a person ‘racist,’ it’s not just the dictionary definition. You are judging them in a particular way. You are labeling them a bad person.”<br /> <br /> With that in mind, is “racist” a useful label? 
Khoo says that the tool philosophy provides for considering such questions is abstraction — getting away from the particularities of today’s political firestorms to try to understand the fundamentals underlying language-related issues.</p> <p><strong>Truth and politics</strong></p> <p>For example, one evening this fall the class met in a Stata Center lecture hall to discuss truth in the context of politics while enjoying a buffet dinner provided by&nbsp;Radius, an MIT initiative that&nbsp;fosters greater engagement with&nbsp;ethics, and which served as co-sponsor&nbsp;of the course.<br /> <br /> Rather than debating the latest headline-grabbing bit of news, however, Khoo introduced the class to the liar paradox. He wrote “This sentence is false” on the whiteboard and examined the challenges the sentence presents. “Things shouldn’t be both true and false. As soon as you talk about truth you come across some interesting paradoxes in philosophy,” says Khoo.<br /> <br /> While belief is not the same thing as truth, and lying is commonplace, human interactions depend on the norm that people generally assert things that they know to be true, Khoo says. “Under what conditions would it be reasonable to try to get your belief into someone else’s mind just by asserting it if that rule weren’t widely adhered to?” asks Khoo. “Violations of these rules can’t be the norm, because if they were it’s hard to see how we could communicate at all.”<br /> <br /> Nevertheless, Khoo notes, “There are rewards for flouting these norms. This is where we’re going to get political.” Citing an article by Lynch, he points out that a politician who contradicts himself can actually win people over on both sides of an issue — because, Khoo says, “Listeners can believe what they want.”<br /> <br /> “The danger emerges when distrust undercuts the sources for truth in society. 
You give up on the truth and retreat to subjectivism: ‘All I know is the content of my own mind.’ That’s paving the path to authoritarianism. The point of democracy is to think about issues yourself,” says Khoo.</p> <p><strong>Applying philosophy to life</strong></p> <p>Students say the class has enriched their thinking about topical issues. “I talk about the subjects and the ideas we talk about in the class with people in my dorm,” says Lawson Kosulic, a junior double majoring in philosophy and physics. “Many of the frameworks we use in philosophy of language are very applicable to real-life situations.”<br /> <br /> For example, the class discussed the effects that hate speech and subtler forms of discrimination can have on human behavior. “Some people feel silenced by what other people are saying. This restricts the amount of information and perspective we get,” says Kosulic.</p> <p>Edwards says, “We spent many hours approaching how we communicate our values and what effect communication has on our democracy. Finally, we discussed how much credence we give our information ... and whether academia offers us any factual security.”<br /> <br /> The question of what to do about the issues raised in the class is answered differently by different philosophers, but Khoo says his point was not to give students answers but to provide resources for them to think through questions on their own. “The goal of the course is to create an open and friendly environment for students to talk about these issues,” says Khoo.<br /> <br /> Kosulic says the class has gotten him thinking about how he uses language himself. “This is why I’m so interested in philosophy. When you take these classes, you see it takes something subjective and emotion-filled and makes it more structured. It lets you look at it in a logical framework,” says Kosulic. “When I go to class it leaves me feeling inspired. 
Every time there’s a new piece to the puzzle.”</p> “The danger emerges when distrust undercuts the sources for truth in society,” says Justin Khoo, an assistant professor of philosophy. “You give up on the truth and retreat to subjectivism: ‘All I know is the content of my own mind.’ That’s paving the path to authoritarianism. The point of democracy is to think about issues yourself.”Photo: Allegra BovermanSHASS, Democracy, Faculty, Language, Philosophy, Students, Politics, Profiles, Classes and programs SHASS Research Fund names 10 recipients for 2018 Wed, 03 Jan 2018 08:35:01 -0500 School of Humanities, Arts, and Social Sciences <p>The annual SHASS Research Fund supports MIT research&nbsp;in the humanities, arts, and&nbsp;social sciences that shows promise of making an important contribution to the proposed area of activity. The 10 recipients for 2018 are:</p> <p><a href="">Nikhil Agarwal</a><strong>,</strong>&nbsp;assistant professor of economics:<strong>&nbsp;</strong>The near-universal coverage of dialysis treatments under Medicare, including for people under age 65, is unique in the U.S. health care system.&nbsp;Agarwal plans to use his SHASS research funding to analyze previously collected data to explore whether and how Medicare reimbursement rates affect the quality of dialysis care and patient outcomes.<br /> <br /> <a href="">Charlotte Brathwaite</a>, assistant professor of&nbsp;music and theater arts:<strong>&nbsp;</strong>SHASS research funding will support "Forgotten Paradise: Grazettes Sun," a film project by director Brathwaite. 
Inspired by being united with her estranged brother for the first time, Brathwaite plans to take a small crew on a research trip to the Gold Coast (Ghana, Benin, and Togo) to excavate the jigsaw puzzle of history and memory&nbsp;and to identify locations significant to her own ancestry and the trans-Atlantic slave trade.<br /> <br /> <a href="">Sarah Brown</a>, director of design for music and theater arts:&nbsp;SHASS research funding will allow Brown to join the production of&nbsp;Gregory Spears’ opera, "Fellow Travelers," which dramatizes the lives of Americans whose careers were ended and lives were transformed during the "Lavender Scare," a period in the Cold War when LGBTQ people were expelled&nbsp;from the federal government because of their sexual identities.<br /> <br /> <a href="">Lerna Ekmekcioglu</a>, associate professor of&nbsp;history: Ekmekcioglu's funding will support her ongoing book and digital humanities project, "Feminism in Armenian: An Interpretive Anthology and a Digital Archive." With Melissa Bilal, a visiting scholar with MIT History, Ekmekcioglu traces the development of Armenian feminist thought&nbsp;from the 1860s to the 1960s.&nbsp;It will be the first collection in English on the topic.<br /> <br /> <a href="">Malick Ghachem</a>, associate professor of history:&nbsp;Ghachem's book on the rise of plantation capitalism in Haiti during the 1720s, "The Old Regime and the Haitian Revolution," will be translated into French with the support of SHASS research funding, making the work available to Francophone scholars in France and elsewhere in the French-speaking world. 
Funding will also allow&nbsp;Ghachem&nbsp;to present his research in France upon its publication by Éditions Karthala and the Centre international de recherches sur les esclavages.<br /> <br /> <a href="">Frederick Harris, Jr.</a>, director of&nbsp;wind and jazz ensembles for music and theater arts: With the support of SHASS research funding, Harris plans to begin researching the life and musical career of Herb Pomeroy (1930-2007) toward a biography with the&nbsp;working title of "It’s the Note You Don’t Play: The musical life of Herb Pomeroy." In addition to portraying Pomeroy's personal life, this book will analyze the three major areas of his musicianship: trumpeter, director/conductor, and educator.<br /> <br /> <a href="">Mark Harvey</a>, senior lecturer in music and theater arts: SHASS Research Fund support will enable the recording and production of a new album of original compositions by Harvey, all performed with the Aardvark Jazz Orchestra. The centerpiece will be “Swamp-a-Rama,” a composition at turns satirical and serious that responds to the current sociopolitical climate in the United States.&nbsp;</p> <p><a href="">Sabine Iatridou</a>, professor of linguistics:&nbsp;In Dutch and German, question words such as “what”&nbsp;are identical to existential words such as “something." Why does a single word have these two different meanings? 
Which meaning came first in the development of the language?&nbsp;What does that tell us about the development of language more generally?&nbsp;Iatridou will explore these questions with the support of SHASS research funding in coordination with colleagues from the University of Amsterdam.<br /> <br /> <a href="">Seth Mnookin</a>, professor of comparative media studies/writing: Funding will support a new book focused on the cultural, historical, and scientific underpinnings of how we age, as well as on research efforts designed to extend both lifespan and healthspan.&nbsp;In addition to providing a detailed overview of research that could reframe how we think about aging, the book will offer readers a guide to what age-related issues can be mitigated by changes to lifestyle, medical interventions, or pharmacological interventions — and which paths to avoid.<br /> <br /> <a href="">Ariel White</a>, assistant professor of political science:&nbsp;With an unprecedented amount of material, White and her colleagues will use a textual analysis tool to analyze the language used to report on crime, asking whether and to what extent&nbsp;local media outlets focus mostly on crimes committed by nonwhite suspects. They will also analyze the relationship between reporting trends and actual crime statistics to see whether these publications accurately reflect the level and type of crime.</p> <p>MIT's&nbsp;School of Humanities, Arts, and Social Sciences is home to research that has a global impact, and to graduate programs recognized as among the finest in the world. The school's research portfolio includes international studies, linguistics, economics, poverty alleviation, history, literature, anthropology, digital humanities, philosophy, global studies and languages, music and theater, writing, political science, security studies, women's and gender studies, and comparative media studies. 
MIT's SHASS&nbsp;research helps alleviate poverty; safeguard elections; steer economies; understand the past and present; assess the impact of new technologies; understand human language; create new forms at the juncture of art and science; and inform policy and cultural mores on issues including justice, healthcare, energy, climate, education, work and manufacturing, inclusion, and economic equity.</p> The SHASS Research Fund supports MIT research that shows promise of making an important contribution in the humanities, arts, or social sciences. Image: SHASS Communications SHASS, Faculty, Awards, honors and fellowships, Economics, Music, Theater, History, Linguistics, Comparative Media Studies/Writing, Science writing, Political science, Research, Grants, Funding Connecting through conversation Whether in Cambridge or Shanghai, MIT senior Joshua Charles Woodard seeks to learn from others’ perspectives and challenge his own. Wed, 01 Nov 2017 00:00:00 -0400 Fatima Husain | MIT News Office <p>Last year, during a reception on campus, MIT senior Joshua Charles Woodard was introduced to Claire Conceison, the Quanta Professor of Chinese Culture and professor of theater arts. The two proceeded to have a conversation in Mandarin, Woodard’s minor, which ended with an on-the-spot invitation for Woodard to visit Shanghai and study Beijing opera for two weeks with a small group of her students.</p> <p>Woodard savored the experience and, as he now considers a career in diplomacy and East Asian affairs, still marvels at how one discussion had such a significant impact on his world view and his plans for the future.</p> <p>The story may not be surprising to those who know Woodard, however, because striking up conversations to share different perspectives is one of his many pastimes. Whether making friends in foreign countries, discussing Institute policies as a member of the eight-student advisory cabinet convened by MIT President L. 
Rafael Reif, or serving as co-chair of the student community and living group Chocolate City, Woodard is always up for a discussion that leads to learning.</p> <p>He dreams of a “willingness to connect the dots,” for people to acknowledge commonalities between different cultures despite nationalistic identities in an increasingly globalized world. “If I don’t reach across the aisle,” he says, “no one is going to.”</p> <p><strong>Exposing kids to STEM</strong></p> <p>Long before he set his eyes on policy and diplomacy, Woodard, who grew up on the South Side of Chicago, envisioned a future in engineering at MIT.</p> <p>Childhood experiences with LEGO robotics and Northwestern University game design courses gave Woodard a foundation in STEM. By the time he was in the seventh grade, he knew he wanted to attend MIT for engineering.</p> <p>However, he acknowledges most students from his community don’t have those childhood experiences that enable them to have similar goals. “I had a chance that others didn’t have,” he says. “Anybody can do what I’ve done, but it’s only a matter of exposure and getting to [students] early.”</p> <p>While a junior at MIT, the mechanical engineering major worked on making those opportunities more available to underrepresented minorities by co-founding the MIT BoSTEM Scholars Academy. The four-week summer program gives high school students valuable experiences in STEM and aims to put them on the path to MIT and other universities. Woodard worked with his co-founder Javier Weddington to raise $18,000 to fund the program, which debuted the summer of 2017 and will continue in the coming summers.</p> <p><strong>An expanding worldview</strong></p> <p>When Woodard was 16, he visited Paris on a school field trip. While he was purchasing items at a grocery store, a store employee complained to him about having to learn English to accommodate American tourists. 
The critique went further, and Woodard listened.</p> <p>For Woodard, the experience was “a pretty direct challenge to my world views [as an American], and the starting point for me to form a framework for how to study the world.”</p> <p>When he meets someone with a different cultural view, he opts to share his perspective with them and also listen to theirs. As an example, he describes his thought process when meeting someone who may not understand the meaning of the Black Lives Matter movement: “You have a set of experiences that didn’t expose you to this. If me taking half an hour to explain it to you can change your worldview so you can help someone else in the future, then it’s worth my time,” he says.</p> <p>During orientation for the class of 2021, Woodard gave a speech to the incoming class about the impact personal experiences can have on the MIT community. He told a story about the frustration he felt when multiple Black Lives Matter posters on campus were vandalized, but how caring gestures from others reminded him of the value of openly discussing personal and controversial issues. At the end of his speech, Woodard urged students to “open your minds and hearts so you can learn from the world” and ultimately better their communities.</p> <p>Chocolate City, a New House dorm community of current and former MIT students who share common backgrounds, interests, ethnicities, and experiences, further drives Woodard’s passion for conversation and leadership.</p> <p>Chocolate City encourages its members to use their experiences to benefit the people from their cultural communities. 
As co-chair, Woodard spearheaded local outreach initiatives, led the efforts to work with the Institute’s administration to preserve the group’s on-campus housing, and encouraged members to engage in leadership positions throughout campus.</p> <p>Along with the other students who made up the Presidential Advisory Cabinet, Woodard discussed with President Reif a wide range of issues affecting the MIT student body, from the current political climate, to campus infrastructure, to issues of particular importance to MIT communities of color such as recruitment and retention of faculty from underrepresented minority groups.</p> <p><strong>Diplomat in the making</strong></p> <p>Woodard’s affinity for connecting with others should serve him well in his area of interest after graduation, which is diplomacy and East Asian affairs, perhaps through work with the State Department.&nbsp;</p> <p>Describing his belief that all human beings share a common set of values and desires, Woodard says, “That’s the type of mentality you’d have to have in foreign diplomacy.” After graduating from MIT, he hopes to continue to learn about the United States’s impact on developed and developing countries.</p> <p>Woodard’s minor is Mandarin, which he says he chose over other languages due to China’s explosive economic growth and investment in America’s infrastructure.</p> <p>During his Mandarin class, he “can learn about culture and learn about the world,” he says. He enjoys taking part in discussions on topics ranging from China’s one-child policy, to tiger moms, to Chinese internet censorship.</p> <p>However, for Woodard, Mandarin is just the beginning.</p> <p>“I want to know four languages by the time I’m 30: English, Chinese, Arabic, and Spanish. In that order,” he says.</p> <p><strong>Shutterbug</strong></p> <p>Woodard’s skills with both people and cameras also led him to launch his own photography business, <a href="">JC Woodard Photography</a>, during his first year at MIT. 
Since then, he has taken photos for sororities, fraternities, and MIT events, as well as professional headshots for the MIT community. Most recently, he covered comedian Hasan Minhaj’s performance at MIT’s Fall Festival this past September.</p> <p>“It’s exciting to capture people in moments of emotion,” he says. “Preserving the moment is so cool.”</p> <p>He suspects his photography skill benefits from his personality. “I don’t know how not to smile, so I’m really personable,” he says. “If you give me a camera, I’m like ‘Hey! Give me a smile, let’s take a picture!’”</p> Woodard is a co-founder of the MIT BoSTEM Scholars Academy. The four-week summer program gives high school students valuable experiences in STEM and aims to put them on the path to MIT and other universities. Woodard worked with his co-founder Javier Weddington to raise $18,000 to fund the program, which debuted the summer of 2017 and will continue in the coming summers. Photo: Ian MacLellanProfile, Students, Undergraduate, STEM education, Policy, SHASS, School of Engineering, Mechanical engineering, Diversity and inclusion, International relations, China, Language, Residential life MIT-Haiti, Google team up to boost education in Kreyòl Institute-led effort to create STEM lexicon is now available for global translation. Mon, 30 Oct 2017 23:59:19 -0400 Peter Dizikes | MIT News Office <p>In recent years, MIT scholars have helped develop a whole lexicon of science and math terms for use in Haiti’s Kreyòl language. Now a collaboration with Google is making those terms readily available to anyone — an important step in the expansion of Haitian Kreyòl for education purposes.</p> <p>The new project, centered around the MIT-Haiti Initiative, has been launched as part of an enhancement to the <a href="">Google Translate</a> program. 
Now anyone using Google Translate can find an extensive set of Kreyòl terms, including recent coinages, in the science, technology, engineering, and math (STEM) disciplines.</p> <p>“In the past five or six years, we’ve witnessed quite a paradigm shift in the way people in Haiti talk about and use Kreyòl,” says Michel DeGraff, a professor of linguistics at MIT and director of the <a href="">MIT-Haiti Initiative</a>. “Having Google Translate on board is going to be another source of intellectual, cultural, economic, and political capital for Kreyòl,” he notes, adding that the project will aid “anyone in the world now, if someone is interested in producing text in Kreyòl from any language.”</p> <p>The concept behind the project is straightforward. In Haiti, most education, especially technical education, traditionally has been conducted in French, even though Kreyòl is the native language of virtually all Haitian citizens. DeGraff, a native of Haiti, has long believed that Kreyòl should be a more central part of Haitian classroom education, and that native Kreyòl speakers would fare better academically and socioeconomically if it were.</p> <p>In 2013, MIT and Haiti signed a joint initiative to promote education in Kreyòl, in coordination with several Haitian universities and educational institutions. DeGraff has said that the project is intended to help Kreyòl-speaking students “build a solid foundation in their own language,” by using Kreyòl to translate digital learning tools for STEM topics and to develop related educational resources, including lesson plans, learning modules, evaluation instruments, and more.</p> <p>As part of the project, DeGraff and other colleagues in the MIT-Haiti Initiative, including STEM-focused faculty in Haiti, have developed new STEM-oriented coinages in Kreyòl, to help extend the scope of the language in technical fields.</p> <p>For instance, consider the English word “torque,” meaning the rotational force applied to an object. 
Paul Belony, the leader of the physics team for the MIT-Haiti Initiative, came up with a new translation of it in Kreyòl: the word “tòday,” taken from the Kreyòl verb “tòde,” which refers to wringing out wet clothes in the process of washing them. The wringing action is a visual example of torque in action, and the term derives from a verb that is common knowledge in Haiti.</p> <p>“It’s a new technical term,” DeGraff says. “It’s not at all what’s used in French for ‘torque,’ but it creates an image all Haitians will know, and then once you go into the physics of it, you can explain it in a way that makes sense.”</p> <p>Another example involves translating the English word “likelihood.” Although often used as a colloquial synonym for “probability,” it does not have the same technical meaning in math. In an effort to avoid this kind of confusion in Kreyòl, MIT-Haiti scholars have tried new terms for “likelihood,” currently using the Kreyòl word “panchan” (which translates as “leaning”), a suggestion made by Haitian psychologist and statistician Serge Madhere.</p> <p>To be sure, as MIT mathematics lecturer and MIT-Haiti member Jeremy Orloff observes, “the final Kreyòl term has not been fixed.” Still, he adds, when a new word for “likelihood” does become settled in Kreyòl, it figures to be “a big improvement on the unhelpful legacy from French or English,” which will help to avoid the conflation of “likelihood” and “probability.”</p> <p>Those are precisely the new kinds of words that appear in the lexicon available through Google Translate. 
And while those terms are now being used in education programs within Haiti, their integration into Google’s powerful translation tool means they “will be re-usable by anyone with an interest in producing Kreyòl materials,” as DeGraff puts it.</p> <p>The collaboration between MIT-Haiti and Google is also an important step forward, as DeGraff sees it, in terms of adding new stakeholders to the project of disseminating Kreyòl widely.</p> <p>“It sends a message that we can no longer be stopped by this belief that Kreyòl is not for science,” DeGraff says. “That’s the key, because we feel we are at this tipping point where more and more people are accepting the language, at the highest levels of science and math education, and most everywhere else in Haitian society, and even outside Haiti — for example, right here in Boston where a new dual language program in English and Kreyòl is being launched by the Boston Public Schools system.”</p> <p>The MIT-Haiti Initiative has received funding from the U.S. National Science Foundation, MIT, the Wade Foundation, and the Open Society Foundation. Since the initiative’s inception in 2010, partner institutions in Haiti have included the Lekòl Kominotè Matènwa, the State University of Haiti, Université Caraïbe, École Supérieure d’Infotronique d’Haïti, Université Quisqueya, NATCOM, the Foundation for Knowledge and Liberty, Haiti’s Ministry of National Education and Professional Training, Haiti’s Prime Minister’s Office, the U.S. Embassy in Haiti, and Sûrtab.</p> Google has teamed with Michel DeGraff's Kreyòl-based STEM education project in Haiti, to add scientific and technical terms to the Kreyòl module of Google Translate. Image: MIT NewsLinguistics, Haiti, Social sciences, Humanities, education, Education, teaching, academics, MIT-Haiti Initiative, Language, STEM education, SHASS Analyzing the language of color Cognitive scientists find that people can more easily communicate warmer colors than cool ones. 
Mon, 18 Sep 2017 15:00:00 -0400 Anne Trafton | MIT News Office <p>The human eye can perceive millions of different colors, but the number of categories human languages use to group those colors is much smaller. Some languages use as few as three color categories (words corresponding to black, white, and red), while the languages of industrialized cultures use up to 10 or 12 categories.</p> <p>In a new study, MIT cognitive scientists have found that languages tend to divide the “warm” part of the color spectrum into more color words, such as orange, yellow, and red, compared to the “cooler” regions, which include blue and green. This pattern, which they found across more than 100 languages, may reflect the fact that most objects that stand out in a scene are warm-colored, while cooler colors such as green and blue tend to be found in backgrounds, the researchers say.</p> <p>This leads to more consistent labeling of warmer colors by different speakers of the same language, the researchers found.</p> <p>“When we look at it, it turns out it’s the same across every language that we studied. Every language has this amazing similar ordering of colors, so that reds are more consistently communicated than greens or blues,” says Edward Gibson, an MIT professor of brain and cognitive sciences and the first author of the study, which appears in the <em>Proceedings of the National Academy of Sciences</em> the week of Sept. 18.</p> <p>The paper’s other senior author is Bevil Conway, an investigator at the National Eye Institute (NEI). 
Other authors are MIT postdoc Richard Futrell, postdoc Julian Jara-Ettinger, former MIT graduate students Kyle Mahowald and Leon Bergen, NEI postdoc Sivalogeswaran Ratnasingam, MIT research assistant Mitchell Gibson, and University of Rochester Assistant Professor Steven Piantadosi.</p> <div class="cms-placeholder-content-video"></div> <p><strong>Color me surprised</strong></p> <p>Gibson began this investigation of color after accidentally discovering during another study that there is a great deal of variation in the way colors are described by members of the Tsimane’, a tribe that lives in remote Amazonian regions of Bolivia. He found that most Tsimane’ consistently use words for white, black, and red, but there is less agreement among them when naming colors such as blue, green, and yellow.</p> <p>Working with Conway, who was then an associate professor studying visual perception at Wellesley College, Gibson decided to delve further into this variability. The researchers asked about 40 Tsimane’ speakers to name 80 color chips, which were evenly distributed across the visible spectrum of color.</p> <p>Once they had these data, the researchers applied an information theory technique that allowed them to calculate a feature they called “surprisal,” which is a measure of how consistently different people describe, for example, the same color chip with the same color word.</p> <p>When a particular word (such as “blue” or “green”) is used to describe many different color chips, any one of those chips has higher surprisal. In other words, chips that people tend to label consistently with just one word have low surprisal, while chips that different people tend to label with different words have higher surprisal. 
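The intuition behind surprisal can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the study's exact formulation: here a chip's surprisal is the expected negative log-probability of recovering that chip from the word a speaker chose, computed from made-up naming data (the chip and word names such as `chip_red` and `grue` are hypothetical).

```python
import math
from collections import Counter

def chip_surprisal(responses):
    """Estimate the average surprisal of each color chip.

    responses: list of (chip, word) pairs, one per naming trial.
    A chip named consistently with a word that reliably picks it out
    scores low; a chip whose label is shared across many chips scores high.
    """
    word_counts = Counter(word for _, word in responses)
    pair_counts = Counter(responses)
    chips = {chip for chip, _ in responses}

    scores = {}
    for chip in chips:
        pairs = [p for p in pair_counts if p[0] == chip]
        n_chip = sum(pair_counts[p] for p in pairs)
        s = 0.0
        for (c, w) in pairs:
            p_word_given_chip = pair_counts[(c, w)] / n_chip          # P(word | chip)
            p_chip_given_word = pair_counts[(c, w)] / word_counts[w]  # P(chip | word)
            s += p_word_given_chip * -math.log2(p_chip_given_word)
        scores[chip] = s
    return scores

# Hypothetical data: "red" names only the red chip, while "grue"
# covers two different cool chips, as in languages with few cool terms.
data = [("chip_red", "red")] * 4 + \
       [("chip_teal", "grue")] * 2 + [("chip_cyan", "grue")] * 2
scores = chip_surprisal(data)
```

With this toy data, the red chip's surprisal is zero (its word identifies it uniquely), while each cool chip has positive surprisal because "grue" leaves a listener uncertain which chip was meant — mirroring the paper's finding that warm colors are communicated more consistently.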
The researchers found that the color chips labeled in Tsimane’, English, and Spanish were all ordered such that cool-colored chips had higher average surprisals than warm-colored chips (reds, yellows, and oranges).</p> <p>The researchers then compared their results to data from the World Color Survey, which performed essentially the same task for 110 languages around the world, all spoken by nonindustrialized societies. Across all of these languages, the researchers found the same pattern.</p> <p>This reflects the fact that while the warm colors and cool colors occupy a similar amount of space in a chart of the 80 colors used in the test, most languages divide the warmer regions into more color words than the cooler regions. Therefore, there are many more color chips that most people would call “blue” than there are chips that people would define as “yellow” or “red.”</p> <p>“What this means is that human languages divide that space in a skewed way,” Gibson says. “In all languages, people preferentially bring color words into the warmer parts of the space and they don’t bring them into the cooler colors.”</p> <p><strong>Colors in the forefront</strong></p> <p>To explore possible explanations for this trend, the researchers analyzed a database of 20,000 images collected and labeled by Microsoft, and they found that objects in the foreground of a scene are more likely to be a warm color, while cooler colors are more likely to be found in backgrounds.</p> <p>“Warm colors are in the foreground, they’re all the stuff that we interact with and want to talk about,” Gibson says. 
“We need to be able to talk about things which are identical except for their color: objects.”</p> <p>Gibson now hopes to study languages spoken by societies found in snowy or desert climates, where background colors are different, to see if their color naming system is different from what he found in this study.</p> <p>Julie Sedivy, an adjunct associate professor of psychology at the University of Calgary, says the paper makes an important contribution to scientists’ ability to study questions such as how culture and language influence how people perceive the world.</p> <p>“It’s a big step forward in establishing a more rigorous approach to asking really important questions that in the past have been addressed in a scientifically flimsy way,” says Sedivy, who was not part of the research team. She added that this approach could also be used to study other attributes that are represented by varying numbers of words in different languages, such as odors, tastes, and emotions.</p> <p>The research was funded by the National Institutes of Health and the National Science Foundation.</p> MIT researchers have found that languages tend to divide the "warm" part of the color spectrum into more color words than the "cooler" regions, which makes communication of warmer colors more consistent. From left to right, this chart shows the order of most to least efficiently communicated colors, in English, Spanish, and Tsimane' languages. Courtesy of the researchers (edited by MIT News)Research, Brain and cognitive sciences, Language, Behavior, School of Science, National Institutes of Health (NIH), National Science Foundation (NSF) Times Higher Education names MIT No. 2 university worldwide for the arts and humanities Schools of Architecture and Planning; Humanities, Arts, and Social Sciences, and several centers are home to the arts and humanities at MIT. 
Mon, 18 Sep 2017 14:05:01 -0400 School of Humanities, Arts, and Social Sciences <p>The Times Higher Education 2018 World University Rankings has named MIT the No. 2 university in the world for arts and humanities. The two top&nbsp;ranked universities — Stanford University and MIT — are closely aligned in the evaluation metrics, which assess the arts and humanities at research-intensive universities across core missions, including research, teaching, and international outlook.</p> <p>The Times Higher Education World University Rankings is an annual publication of university rankings by <em>Times Higher Education, </em>a leading British education magazine. This ranking of MIT’s global role in the arts and humanities follows other recent recognition for the Institute’s contributions to individual fields and disciplines. The 2018 QS World University rankings, for example, name MIT as the world’s top university for architecture, economics, engineering, linguistics, and natural sciences, as well as the No. 1 university in the world overall.</p> <p>Of the <em>Times Higher Education</em> ranking, MIT President L. Rafael Reif said, “Perhaps because 'TECHNOLOGY' is carved in stone above MIT's front door, outsiders are not always prepared for the caliber of our research and education in the humanities and the arts. But it is the wisdom of the remarkable scholars in these fields, and lessons from their disciplines, that help our students develop fully into the creative citizens and inspired leaders they seek to become.”</p> <p>“The arts and humanities are deeply embedded at MIT, throughout our schools and departments and across the curriculum,” said Hashim Sarkis, dean of the School of Architecture and Planning. 
“I am delighted to see this broad strength recognized not only for its importance to MIT but for what it offers to the world.”<br /> <br /> Outstanding programs in the School of Humanities, Arts, and Social Sciences — including linguistics, history, philosophy, music and theater arts, literature, global studies and languages, media studies, and writing — sit alongside equally strong initiatives within the School of Architecture and Planning in the visual arts, architecture, design, and history, theory, and criticism. These disciplines are complemented by the Center for Art, Science and Technology (CAST), the office of the Arts at MIT, the MIT List Visual Arts Center, and the MIT Museum.</p> <p>“At MIT, we view the humanities and arts as essential for advancing knowledge, for educating young students, and for solving global issues,” said Melissa Nobles, Kenan Sahin dean of the School of Humanities, Arts, and Social Sciences. “The world’s problems are so complex they’re not only scientific and technological problems. They are as much human and moral problems.”</p> "100 percent of MIT undergraduates study the arts and humanities, joining our faculty in addressing some of the largest, most consequential human questions of our time," notes Melissa Nobles, Kenan Sahin dean of the School of Humanities, Arts, and Social Sciences.Photo: Madcoverboy/Wikimedia CommonsAwards, honors and fellowships, Rankings, Architecture, Arts, Design, Education, teaching, academics, Global Studies and Languages, History, Humanities, Innovation and Entrepreneurship (I&E), Literature, Linguistics, Philosophy, Comparative Media Studies/Writing, Theater, Music, SHASS, School of Architecture and Planning, Program in HTC American Sign Language at MIT A new club is spreading awareness of Deaf culture and American Sign Language. Thu, 14 Sep 2017 14:20:01 -0400 Maisie O’Brien | MindHandHeart Initiative <p>On Aug.
15, 25 MIT students and staff members were engaged in a lively lecture and discussion in Building 66 — but the room was completely silent. The teacher, Carol Zurek, wrote a word on the board and gestured to the class to repeat her movements.</p> <p>The students practiced the motion, incorporating it into their existing American Sign Language (ASL) vocabulary.</p> <p>The class was organized by the <a href="" target="_blank">American Sign Language and Deaf Culture Club at MIT</a> and offered free to members of the MIT community. This fall, the club is hosting non-credit, level-one and level-two courses, supported by the <a href="" target="_blank">MindHandHeart Innovation Fund</a> and <a href="" target="_blank">Graduate Student Life Grants</a>.</p> <p>The club was officially formed in 2016, though MIT has offered ASL classes organized by a group of students and staff since 2014, with support from the <a href="" target="_blank">Media Lab</a>. The interest level in the courses has been impressive, with nearly 80 people signing up for classes that are capped at 25 students.</p> <p>“I think the interest speaks to the MIT community wanting to be open and inclusive,” said Barbara Johnson, a staff member in MIT Information Systems&nbsp;and Technology (IS&amp;T) who spearheaded the effort to bring ASL classes to MIT and is deaf.&nbsp;“The goals of the club are to spread awareness of Deaf culture and ASL as a language, and to get people to see deafness as another component of diversity.”</p> <p>The classes are structured in six-week and eight-week sessions and meet for approximately 90 minutes. Students use a book to guide them through learning vocabulary and basic conversational skills, and the instructor prompts students to engage in structured role-playing.</p> <p>“The keystone of the class is that voices are off,” Johnson says.
“This can be quite a jolt for some people — figuring out how to communicate using a visual language.”</p> <p>ASL Club president Gustavo Goretkin, a&nbsp;PhD student at the <a href="" target="_blank">Computer Science and Artificial Intelligence Laboratory</a>,&nbsp;says this is one of the most rewarding aspects of the class. “You might be conversing with a student and may even consider them a friend, but you’ll go months without hearing their voice,” he says. “It’s a very special and unique layer of connection to have with a person.”</p> <p>Kristy Johnson, a PhD student in the Media Lab and a founding officer in the club, also appreciates the community she has found through the ASL Club.</p> <p>“I have two kids and live off campus,” she explains. “So it’s been great for me to have an outlet that promotes so much engagement and connection. You have to really pay attention when you’re learning ASL. You have to look the other person in the eyes and focus on what they’re saying. You can’t be distracted or on your phone.”</p> <p>Johnson was inspired to sign up for the classes in order to better communicate with her son and because of her general interest in languages.</p> <p>“I use sign language extensively with my son, who has autism as well as many other special needs," she says. "He responds much more consistently to signing than he ever does to spoken speech. I also use it with my daughter, who just turned one.”</p> <p>Learning ASL has also influenced Johnson's&nbsp;research at the Media Lab.</p> <p>“It’s valuable to be able to communicate with lots of different people and types of learners,” she says. “The more you become aware of these different abilities, both through people like my son and through members of the Deaf community, the more we can invent for and with that community. 
If MIT wants to stay at the forefront of innovation, we have to be innovating for everybody.”</p> <p>In addition to the ASL classes, the club plans to host social events in the fall, including lunchtime practice sessions and field trips. Looking to the future, the group hopes that ASL will become a more permanent fixture on campus and that MIT will offer for-credit courses.</p> <p>To get a glimpse of the ASL Club in action, check out their <a href="">video</a> that was awarded first place in the MindHandHeart <a href="">“Heart at MIT”</a> video contest. To learn more and register for classes, visit the <a href="">ASL Club website</a>.</p> Members of the American Sign Language and Deaf Culture Club practice signs in a recent class.Photo: Maisie O'Brien/MindHandHeart InitiativeMindHandHeart, Classes and programs, Clubs and activities, Community, Student life, Language Robot learns to follow orders like Alexa ComText, from the Computer Science and Artificial Intelligence Laboratory, allows robots to understand contextual commands. Wed, 30 Aug 2017 10:00:00 -0400 Adam Conner-Simons | Rachel Gordon | CSAIL <p>Despite what you might see in movies, today’s robots are still very limited in what they can do. They can be great for many repetitive tasks, but their inability to understand the nuances of human language makes them mostly useless for more complicated requests.</p> <p>For example, if you put a specific tool in a toolbox and ask a robot to “pick it up,” it would be completely lost. 
Picking it up means being able to see and identify objects, understand commands, recognize that the “it” in question is the tool you put down, go back in time to remember the moment when you put down the tool, and distinguish the tool you put down from other ones of similar shapes and sizes.</p> <p>Recently, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have gotten closer to making this type of request easier: In a new paper, they present an Alexa-like system that allows robots to understand a wide range of commands that require contextual knowledge about objects and their environments. They've dubbed the system “ComText,” for “commands in context.”</p> <p>The toolbox situation above was among the types of tasks that ComText can handle. If you tell the system that “the tool I put down is my tool,” it adds that fact to its knowledge base. You can then update the robot with more information about other objects and have it execute a range of tasks like picking up different sets of objects based on different commands.</p> <p>“Where humans understand the world as a collection of objects and people and abstract concepts, machines view it as pixels, point-clouds, and 3-D maps generated from sensors,” says CSAIL postdoc Rohan Paul, one of the lead authors of the paper. “This semantic gap means that, for robots to understand what we want them to do, they need a much richer representation of what we do and say.”</p> <p>The team tested ComText on Baxter, a two-armed humanoid robot developed by Rethink Robotics, the company founded by former CSAIL director Rodney Brooks.</p> <p>The project was co-led by research scientist Andrei Barbu, alongside research scientist Sue Felshin, senior research scientist Boris Katz, and Professor Nicholas Roy.
They presented the paper at last week’s International Joint Conference on Artificial Intelligence (IJCAI) in Australia.</p> <p><strong>How it works</strong></p> <p>Things like dates, birthdays, and facts are forms of “declarative memory.” There are two kinds of declarative memory: semantic memory, which is based on general facts, like “the sky is blue,” and episodic memory, which is based on personal facts, like remembering what happened at a party.</p> <p>Most approaches to robot learning have focused only on semantic memory, which obviously leaves a big knowledge gap about events or facts that may be relevant context for future actions. ComText, meanwhile, can observe a range of visuals and natural language to glean “episodic memory” about an object’s size, shape, position, and type, and even whether it belongs to somebody. From this knowledge base, it can then reason, infer meaning, and respond to commands.</p> <p>“The main contribution is this idea that robots should have different kinds of memory, just like people,” says Barbu. “We have the first mathematical formulation to address this issue, and we’re exploring how these two types of memory play and work off of each other.”</p> <p>With ComText, Baxter was successful in executing the right command about 90 percent of the time.
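The semantic/episodic split described above can be illustrated with a toy sketch. This is not ComText's actual model (the paper describes a probabilistic grounding framework); the class, field names, and event format here are invented purely for illustration. General facts go into one store, time-stamped observations into another, and a reference like "the tool I put down" is resolved against the most recent matching event:

```python
from dataclasses import dataclass, field

@dataclass
class ToyKnowledgeBase:
    """Toy illustration of the two declarative-memory types (not ComText)."""
    semantic: dict = field(default_factory=dict)   # general facts: object -> property
    episodic: list = field(default_factory=list)   # events: (timestep, actor, action, object)

    def tell_fact(self, obj, prop):
        """Store a general ('semantic') fact about an object."""
        self.semantic[obj] = prop

    def observe(self, t, actor, action, obj):
        """Record a time-stamped ('episodic') event."""
        self.episodic.append((t, actor, action, obj))

    def resolve(self, query):
        """Resolve 'the <query> I put down' to the most recent matching event."""
        matches = [e for e in self.episodic
                   if e[1] == "user" and e[2] == "put down" and query in e[3]]
        return max(matches, key=lambda e: e[0])[3] if matches else None

kb = ToyKnowledgeBase()
kb.tell_fact("red screwdriver", "my tool")             # semantic: a general fact
kb.observe(1, "robot", "put down", "blue screwdriver") # episodic: specific events
kb.observe(2, "user", "put down", "red screwdriver")
print(kb.resolve("screwdriver"))  # -> red screwdriver
```

A purely semantic store could answer "which tool is mine?" but not "which tool did I just put down?" — which is why the episodic record matters for contextual commands.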
In the future, the team hopes to enable robots to understand more complicated information, such as multi-step commands, the intent of actions, and how to use objects’ properties to interact with them more naturally.</p> <p>For example, if you tell a robot that one box on a table has crackers, and one box has sugar, and then ask the robot to “pick up the snack,” the hope is that the robot could deduce that sugar is a raw material and therefore unlikely to be somebody’s “snack.”</p> <p>By creating much less constrained interactions, this line of research could enable better communication for a range of robotic systems, from self-driving cars to household helpers.</p> <p>“This work is a nice step towards building robots that can interact much more naturally with people,” says Luke Zettlemoyer, an associate professor of computer science at the University of Washington who was not involved in the research. “In particular, it will help robots better understand the names that are used to identify objects in the world, and interpret instructions that use those names to better do what users ask.”</p> <p>The work was funded, in part, by the Toyota Research Institute, the National Science Foundation, the Robotics Collaborative Technology Alliance of the U.S. Army, and the Air Force Research Laboratory.</p> ComText allows robots to understand contextual commands such as, “Pick up the box I put down.”Research, School of Engineering, Artificial intelligence, Data, Computer Science and Artificial Intelligence Laboratory (CSAIL), Computer science and technology, Electrical Engineering & Computer Science (eecs), Linguistics, Machine learning, Robotics, National Science Foundation (NSF) QS ranks MIT the world’s No. 1 university for 2017-18 Ranked at the top for the sixth straight year, the Institute also places first in 12 of 46 disciplines.
Thu, 08 Jun 2017 00:00:00 -0400 MIT News Office <p>MIT has been ranked as the top university in the world in the latest QS World University Rankings. This marks the sixth straight year in which the Institute has been ranked in the No. 1 position.</p> <p>The full 2017-18 rankings — published by Quacquarelli Symonds, an organization specializing in education and study abroad — can be found at <a href=""></a>. The QS rankings were based on academic reputation, employer reputation, citations per faculty, student-to-faculty ratio, proportion of international faculty, and proportion of international students. MIT earned a perfect overall score of 100.</p> <p>MIT was also ranked the world’s top university in <a href="">12 of 46 disciplines</a> ranked by QS, as announced in March of this year.</p> <p>MIT received a No. 1 ranking in the following QS subject areas: Architecture/Built Environment; Linguistics; Computer Science and Information Systems; Chemical Engineering; Civil and Structural Engineering; Electrical and Electronic Engineering; Mechanical, Aeronautical and Manufacturing Engineering; Chemistry; Materials Science; Mathematics; Physics and Astronomy; and Economics.</p> <p>The Institute also ranked among the top five institutions worldwide in another seven QS disciplines: Art and Design (No. 2), Biological Sciences (No. 2), Earth and Marine Sciences (No. 5), Environmental Sciences (No. 3), Accounting and Finance (No. 2), Business and Management Studies (No. 4), and Statistics and Operational Research (No. 
2).</p> Photo: AboveSummit with Christopher HartingRankings, Architecture, Chemical engineering, Chemistry, Civil and environmental engineering, Electrical Engineering & Computer Science (eecs), Economics, Linguistics, Materials Science and Engineering, DMSE, Mechanical engineering, Aeronautical and astronautical engineering, Physics, Business and management, Accounting, Finance, Arts, Design, Mathematics, EAPS, School of Architecture and Planning, SHASS, School of Science, School of Engineering, Sloan School of Management Articles of faith A new study of the words “a” and “the” sheds light on language acquisition. Thu, 06 Apr 2017 00:00:00 -0400 Peter Dizikes | MIT News Office <p>If you have the chance, listen to a toddler use the words “a” and “the” before a noun. Can you detect a pattern? Is he or she using those two words correctly?</p> <p>And one more question: When kids start using language, how much of their know-how is intrinsic, and how much is acquired by listening to others speak?</p> <p>Now a study co-authored by an MIT professor uses a new approach to shed more light on this matter — a central issue in the area of language acquisition.</p> <p>The results suggest that experience is an important component of early-childhood language usage although it doesn’t necessarily account for all of a child’s language facility. Moreover, the extent to which a child learns grammar by listening appears to change over time, with a large increase occurring around age 2 and a leveling off taking place in subsequent years.</p> <p>“In this view, adult-like, rule-based [linguistic] development is the end-product of a construction of knowledge,” says Roger Levy, an MIT professor and co-author of a new paper summarizing the study. 
Or, as the paper states, the findings are consistent with the idea that children “lack rich grammatical knowledge at the outset of language learning but rapidly begin to generalize on the basis of structural regularities in their input.”</p> <p>The paper, “The Emergence of an Abstract Grammatical Category in Children’s Early Speech,” appears in the latest issue of <em>Psychological Science</em>. The authors are Levy, a professor in MIT’s Department of Brain and Cognitive Sciences; Stephan Meylan of the University of California at Berkeley; Michael Frank of Stanford University; and Brandon Roy of Stanford and the MIT Media Lab.</p> <p><strong>Learning curve</strong></p> <p>Studying how children use terms such as “a dog” or “the dog” correctly can be a productive approach to studying language acquisition, since children use the articles “a” and “the” relatively early in their lives and tend to use them correctly. Again, though: Is that understanding of grammar innate or acquired?</p> <p>Some previous studies have examined this specific question by using an “overlap score,” that is, the proportion of nouns that children use with both “a” and “the,” out of all the nouns they use. When children use both terms correctly, it indicates they understand the grammatical difference between indefinite and definite articles, as opposed to cases where they may (incorrectly) think only one or the other is assigned to a particular noun.</p> <p>One potential drawback to this approach, however, is that the overlap score might change over time simply because a child might hear more article-noun pairings, without fully recognizing the grammatical distinction between articles.</p> <p>By contrast, the current study builds a statistical model of language use that incorporates not only child language use but adult language use recorded around children, from a variety of sources.
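The overlap score described above is straightforward to compute. As a minimal sketch (my own illustration, not the authors' code — the input format of article–noun pairs is an assumption), given the pairs observed in a child's transcribed speech:

```python
def overlap_score(noun_uses):
    """Proportion of noun types used with BOTH "a" and "the".

    noun_uses: iterable of (article, noun) pairs from transcribed child
    speech. Illustrative only; not the paper's implementation.
    """
    with_a = {noun for article, noun in noun_uses if article == "a"}
    with_the = {noun for article, noun in noun_uses if article == "the"}
    all_nouns = with_a | with_the
    return len(with_a & with_the) / len(all_nouns) if all_nouns else 0.0

# "dog" occurs with both articles; "ball" and "cat" with only one each.
uses = [("a", "dog"), ("the", "dog"), ("a", "ball"), ("the", "cat")]
print(round(overlap_score(uses), 2))  # -> 0.33
```

The drawback noted above falls out of the definition: the score can rise simply because more article–noun pairings have been heard and repeated, not necessarily because the child has grasped the underlying grammatical category.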
Some of these are publicly available corpora of recordings of children and caregivers; others are records of individual children; and one source is the “Speechome” experiment conducted by Deb Roy of the MIT Media Lab, which features recordings of over 70 percent of his child’s waking hours.</p> <p>The Speechome data, as the paper notes, provides some of the strongest evidence yet that “children’s syntactic productivity changes over development” — that younger children learn grammar from hearing it, and do so at different rates during different phases of early childhood.</p> <p>“I think the method starts to get us traction on the problem,” Levy says. “We saw this as an opportunity both to use more comprehensive data and to develop new analytic techniques.”</p> <p><strong>A work in progress</strong></p> <p>Still, as the authors note, a second conclusion of the paper is that more basic data about language development is needed. Much of the available information is not comprehensive enough, and thus “likely not sufficient to yield precise developmental conclusions.”</p> <p>And as Levy readily acknowledges, developing an airtight hypothesis about grammar acquisition is always likely to be a challenge.</p> <p>“We’re never going to have an absolute complete record of everything a child has ever heard,” Levy says.</p> <p>That makes it much harder to interpret the cognitive process leading to either correct or incorrect uses of, say, articles such as “a” and “the.” After all, if a child uses the phrase “a bus” correctly, it still might only be because that child has heard the phrase before and likes the way it sounds, not because he or she grasped the underlying grammar.</p> <p>“Those things are very hard to tease apart, but that’s what we’re trying to do,” Levy says.
“This is only really an initial step.”</p> The extent to which a child learns grammar by listening appears to change over time, with a large increase occurring around age 2 and a leveling off taking place in subsequent years, according to a new study. Image: Jose-Luis Olivares/MITLearning, Linguistics, Research, Brain and cognitive sciences, School of Science