MIT News - Brain and cognitive sciences http://news.mit.edu/topic/mitbrain-cognitive-rss.xml MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community. en Tue, 28 Apr 2020 23:59:59 -0400

Studying the brain and supporting the mind http://news.mit.edu/2020/tarun-kamath-student-0430 At MIT, senior Tarun Kamath has explored neuroscience and science policy, while helping his peers find ways to reduce stress. Tue, 28 Apr 2020 23:59:59 -0400 Becky Ham | MIT News correspondent http://news.mit.edu/2020/tarun-kamath-student-0430 <p>“I’ve always been interested in science from a very young age, and my grandmother was actually a really big influence in that regard,” says Tarun Kamath, when asked about his academic inspirations. “She was a big believer in being very passionate and very good at what you might want to do.”</p> <p>Kamath is a senior majoring in brain and cognitive sciences as well as a master’s student in biological engineering. As a child, he did sudoku puzzles with his grandmother in the mornings. He received a big sudoku book from her for his eighth birthday, along with encouragement to watch videos of sudoku champions in order to learn from the very best.</p> <p>But when Kamath was in high school, his grandmother was diagnosed with atypical Parkinson’s disorder, and the “harrowing experience” of caring for a formerly vigorous and passionate woman became an inspiration of a different sort, he says.</p> <p>“My family and I struggled to get access to the care she needed, spending months navigating the Medicaid system to afford her medications. Her doctors prescribed her more pills and patches, and yet when I talked to her she still confused me with my brother, her brother, even her neighbor,” Kamath wrote in a recent scholarship essay. “Maddeningly, from a glance, she seemed healthy, but internally, her mind, her independence, even her personality, was slipping away. I was shocked and frustrated by the inadequacy of available medical options and the difficulty we had accessing them. What was the point of medicine if it couldn’t help the people I loved?”</p> <p>At MIT, Kamath’s research has focused on neurodegenerative disease biology in Bradley Hyman’s lab at Massachusetts General Hospital, looking at toxic aggregations of the tau protein in Alzheimer’s disease. He has been working in the lab since the end of his first year. The 20-minute bike ride up the river to Mass General has been worth it, he says. “There’s a ton of amazing biomedical research happening around Boston, but what’s really special about a lab is the culture. It’s not just about what work you’re doing but it’s about the people that you do it with.”</p> <p>The lab has provided him with mentorship, the independence to start new projects, and most importantly, the ability to fail. “Especially as a student, it’s important to be in a place that not only encourages results but is accepting of failure, because 99 percent of science is failure,” Kamath explains. “I got lucky with this lab, and with what I’ve been able to learn about a field that is very personally relevant to me.”</p> <p>Since leaving campus in response to the Covid-19 pandemic, Kamath has been writing his master’s thesis and wrapping up some of his research projects, along with trying to keep his mind and body active. “I've been trying to watch videos to learn about topics I've been interested in but never had time to fully explore.
I’m also video-calling and messaging many of my friends who are now scattered, to check in and see how they are all doing,” he says.</p> <p>Kamath is considering an MD/PhD program after graduation, in part because he wants to continue in research and because working closely with the neuropathology department at Mass General has helped him realize the “importance of the interplay between science and medicine.”</p> <p>His experiences with his grandmother, along with a key first-year class at MIT, also opened his eyes to the important role of health policy alongside the lab and the clinic. In the class 17.309 (Science, Technology and Public Policy), “we talked about a lot of case studies, and in lots of them people are not communicating effectively,” Kamath explains. “What was really fascinating was learning that yes, there is science, but science doesn’t translate into tangible things that can help people until the policy aspect happens.”</p> <p>“That’s sort of been a continuing theme of my MIT education, that you come into college with this preconceived notion of how systems work,” he adds, “and that can be small-scale, like how cells work, or it could be macroscale, like how countries work. And then you take classes and you realize that things are just way more complicated.”</p> <p>Over the summer of 2018, Kamath was an intern in the U.S. House of Representatives Committee on Ways and Means, as part of the <a href="http://web.mit.edu/summerwash/interns/2018/kamath.html">MIT Washington, D.C. Summer Internship Program</a>. He helped analyze bills and draft memos on methods to reduce fraud, waste, and abuse in Medicare, among other tasks.</p> <p>“There’s the old joke that the opposite of progress is Congress, but there are a ton of things happening there. It was very encouraging, the constant back and forth and refining of ideas,” he says. “And from that I’m more willing to hear multiple sides of an argument in general.”</p> <p>From 2017 to 2019, Kamath served as president of the MIT chapter of <a href="http://activeminds.mit.edu/">Active Minds</a>, a national mental health organization. There had been a chapter of the group at his high school, and he sought it out when he came to MIT “because I resonated a lot with their goal,” he says. Other peer support groups on campus “are sort of first aid for mental health. Somebody has a really stressful day and the peer supporter is there to help them through or to help them find a counselor if the stress is chronic,” he explains. “Active Minds is trying to prevent that day from happening in the first place. We try to encourage an environment in which people are less stressed or, if they are stressed, to go talk to somebody.”</p> <p>College-age students have high rates of mental health disorders but one of the lowest rates of seeking help for those disorders, he adds. “There’s this huge disparity between what people are experiencing and what they tell other people they are experiencing, and so Active Minds tries to bridge that gap.”</p> <p>Kamath has never forgotten the support he received as a first-year from his <a href="http://zbt.mit.edu/">Zeta Beta Tau</a> fraternity class father, when he was having a “meltdown” over a differential equations assignment. “I didn’t even have to think about it, I just went to my class father’s room,” he recalls. “We chatted for a while and walked to the 24/7 Star Market to buy a couple of cold brew coffees.
That had a big influence on me.”</p> <p>“I feel supported and encouraged by everybody here and there’s not a barrier to me asking for help. And that’s a culture that I wanted to continue and cultivate my junior year,” says Kamath, who did just that by becoming a class father himself.</p> <p>One of the new things Kamath tried out when he first came to MIT was bhangra, the high-energy and competitive Punjabi folk dance. When he came up to the campus for a preview weekend in high school, a member of <a href="http://mirchi.mit.edu/">Mirchi</a>, MIT’s Bollywood fusion dance team, invited Kamath to one of the team’s workshops. Kamath attended, although he had never danced before, and was hooked. He was a member of the <a href="http://bhangra.mit.edu/">MIT Bhangra Dance</a> team for two years.</p> <p>“I had been kind of afraid of performing, but it’s super-liberating, because in bhangra, it’s all about those seven minutes,” he says. “Win or lose, you put everything you’ve got into those seven minutes that you have on stage to perform, and you have to leave it all behind there. It’s an adrenaline rush!”</p> “I feel supported and encouraged by everybody here and there’s not a barrier to me asking for help,” MIT senior Tarun Kamath says. Photo: Ian MacLellan. Profile, Students, Undergraduate, Brain and cognitive sciences, Biological engineering, School of Science, School of Engineering, Health, Medicine, Neuroscience, Policy, Alzheimer’s, Parkinson’s, Mental health, Government

How could Covid-19 and the body’s immune response affect the brain? http://news.mit.edu/2020/how-could-covid-19-and-body-immune-response-affect-brain-0428 Picower Institute researchers are embarking on experiments to learn the mechanisms by which coronavirus might affect mental health. Tue, 28 Apr 2020 14:20:01 -0400 David Orenstein | Picower Institute for Learning and Memory http://news.mit.edu/2020/how-could-covid-19-and-body-immune-response-affect-brain-0428 <p>Although the most immediately threatening symptoms of Covid-19 are respiratory, neuroscientists are intently studying the pandemic from the perspective of the central nervous system. Clinical <a href="https://jamanetwork.com/journals/jamaneurology/fullarticle/2764549" rel="noopener noreferrer" target="_blank">research</a> and case <a href="https://www.cureus.com/articles/29414-neurological-complications-of-coronavirus-disease-covid-19-encephalopathy" rel="noopener noreferrer" target="_blank">reports</a> provide mounting evidence of impacts on the brain.</p> <p>To get ahead of the possible long-term neurological problems from infection, multiple labs in The Picower Institute for Learning and Memory at MIT have begun pursuing research to determine whether and how the coronavirus affects the brain, either directly or via the body’s heightened immune response.
If it indeed does, that would be consistent with a history of reports that infections and immune system activity elsewhere in the body may have long-term impacts on mental health.</p> <p>While some scientists, for instance, <a href="https://www.the-scientist.com/features/can-the-flu-and-other-viruses-cause-neurodegeneration--65498" rel="noopener noreferrer" target="_blank">suspect a role</a> for infectious diseases in neurodegenerative disorders such as Parkinson’s disease or dementias, Picower Institute member Gloria Choi and Harvard University immunologist Jun Huh have <a href="https://www.nature.com/articles/nature23909" rel="noopener noreferrer" target="_blank">meticulously traced</a> the pathway by which infection in a pregnant mother can lead to autism-like symptoms in her child and how, counterintuitively, infection in people with some autism spectrum disorders can temporarily <a href="https://www.nature.com/articles/s41586-019-1843-6" rel="noopener noreferrer" target="_blank">mitigate behavioral symptoms</a>. With deep expertise in neuro-immune interactions, as well as in the neural systems underlying the sense of smell, which is reported to be lost in some Covid-19 patients, Choi is planning several collaborative coronavirus studies.</p> <p>“With these various suspected neurological symptoms, if we can determine the underlying mechanisms by which the immune system affects the nervous system upon the infection with SARS-CoV-2 or related viruses, then the next time the pandemic comes we can be prepared to intervene,” says Choi, the Samuel A. Goldblith Career Development Assistant Professor of Applied Biology in the Department of Brain and Cognitive Sciences.</p> <p>Like Choi, Picower Professor Li-Huei Tsai is also planning studies of the neurological impact of Covid-19. Tsai’s studies of Alzheimer’s disease include investigation of the blood-brain barrier, which tightly gates what goes into and out of the brain through the circulatory system. Technologies that her lab is developing with collaborators including MIT Institute Professor Robert Langer put the team in a unique position to assess whether and how coronavirus infection might overrun or evade that safeguard.</p> <p>“It is critical to know how the coronavirus might affect the brain,” Tsai says. “We are eager to bring our technology to bear on that question.”</p> <p><strong>Neuro-immune interactions</strong></p> <p>Choi is considering three lines of coronavirus research. Together with Picower Institute colleagues Newton Professor Mriganka Sur and Assistant Professor Kwanghun Chung, she hopes to tackle the question of anosmia, the loss of smell. Choi has studied the olfactory system in mice since her graduate and postdoc days. Moreover, a key finding of her neuroimmunology research is that neurons express receptors for some of the signaling molecules, called cytokines, that immune system cells emit, so immune activity can directly affect neural development and activity. Working in mouse models, the team plans to ask whether such an impact, amid the immune system’s heightened response to Covid-19, is occurring in the olfactory system.</p> <p>Based on her and Huh’s studies of how maternal infection leads to autism-like symptoms in offspring, the pair are concerned about two other aspects of coronavirus infection.
One builds on the <a href="https://www.nature.com/articles/nature23910" rel="noopener noreferrer" target="_blank">finding</a> that the risk of offspring developing neurological problems depended strongly on the composition of the pregnant mother’s gut microbiome, the populations of bacteria that everyone harbors within their body. Given the wide range of outcomes seen among coronavirus patients, Choi and Huh wonder whether microbiome composition may play a role in addition to factors such as age or underlying health conditions. If that turns out to be the case, then tweaking the microbiome, perhaps with diet or probiotics, could improve outcomes. Working with colleagues in Korea and Japan, they are embarking on studies that will correlate microbiome composition in patients with their coronavirus outcomes.</p> <p>Over the longer term, Choi and Huh also hope to study whether Covid-19 infection among pregnant mothers presents an elevated risk of their offspring developing neurodevelopmental disorders like autism. In their research in mice, they have shown that given a particular maternal microbiome composition, immune cells in pregnant mice expressed elevated levels of the cytokine IL-17a. The molecule directly influenced fetal brain development, causing neural circuits governing autism-like behavioral symptoms to develop improperly. The pair aim to assess whether that could happen with coronavirus.</p> <p><strong>Covid-19 access to the brain</strong></p> <p>A major question is whether and how the SARS-CoV-2 virus can reach the central nervous system. Tsai’s lab may be able to find out using an advanced laboratory model of the blood-brain barrier (BBB), whose development has been led by postdoc Joel Blanchard. In a study in press, he has shown that the model made of human astrocytes, brain endothelial cells, and pericytes cultured from induced pluripotent stem cells closely mirrors properties of the natural BBB, such as permeability. In collaboration with Langer, the team is integrating the model with induced pluripotent stem cell-derived cultures of neurons and other crucial brain support cells, like microglia and oligodendrocytes, on a chip (called a “<a href="https://picower.mit.edu/news/mit-sets-out-model-alzheimers-disease-complexity-chip">miBrain” chip</a>) to provide a sophisticated and integrated testbed of brain cell and cerebral vascular interaction.</p> <p>With the miBrain chip platform, Tsai’s lab plans several experiments to better understand how the virus may put the brain at risk. In one, they can culture miBrain chips from a variety of individuals to see whether the virus is able to permeate the BBB equally or differently in those personalized models. They can also test another means of viral entry into the brain — whether the body’s immune system response (a so-called “<a href="http://news.mit.edu/2020/proteins-cytokine-storms-covid-19-0416" rel="noopener noreferrer" target="_blank">cytokine storm</a>”) increases the BBB’s permeability — by using blood serum from Covid-19 patients in the miBrain chip model.</p> <p>Yet another way the virus might spread in the nervous system is from neuron to neuron via their connections called synapses. With cultures of thousands of neurons, the miBrain chip platform could help them determine whether that’s the case, and whether specific kinds of neurons are more susceptible to becoming such conduits.</p> <p>Finally, there may be genetic differences that increase susceptibility to viral entry to the brain.
Using technologies like CRISPR/Cas9, the team can engineer such candidate risk genes into the BBB models to test whether permeability varies. In their Alzheimer’s disease research, for example, they study whether variations in a gene called ApoE cause different degrees of amyloid protein plaque buildup in the BBB model.</p> <p>The potential interactions among the virus, the microbiome, the immune system, and the central nervous system are likely to be highly complex, but with their expertise, tools, and strong collaborations, Picower Institute researchers see ways to help illuminate the possible neurological effects of coronavirus infection.</p> Mounting evidence suggests that the SARS-CoV-2 virus affects the brain, as well as the lungs. Picower Institute, Brain and cognitive sciences, School of Science, Covid-19, Neuroscience, Mental health, Autism

Muscle signals can pilot a robot http://news.mit.edu/2020/conduct-a-bot-muscle-signals-can-pilot-robot-mit-csail-0427 CSAIL’s Conduct-A-Bot system uses muscle signals to cue a drone’s movement, enabling more natural human-robot communication. Mon, 27 Apr 2020 13:30:01 -0400 Rachel Gordon | CSAIL http://news.mit.edu/2020/conduct-a-bot-muscle-signals-can-pilot-robot-mit-csail-0427 <p>Albert Einstein famously postulated that “the only real valuable thing is intuition,” arguably one of the most important keys to understanding intention and communication.&nbsp;</p> <p>But intuitiveness is hard to teach — especially to a machine. Looking to improve this, a team from MIT’s <a href="http://csail.mit.edu">Computer Science and Artificial Intelligence Laboratory</a> (CSAIL) came up with a method that dials us closer to more seamless human-robot collaboration. The system, called “Conduct-A-Bot,” uses human muscle signals from wearable sensors to pilot a robot’s movement.&nbsp;</p> <p>“We envision a world in which machines help people with cognitive and physical work, and to do so, they adapt to people rather than the other way around,” says Professor Daniela Rus, director of CSAIL, deputy dean of research for the MIT Stephen A. Schwarzman College of Computing, and co-author on a paper about the system.&nbsp;</p> <p>To enable seamless teamwork between people and machines, electromyography and motion sensors are worn on the biceps, triceps, and forearms to measure muscle signals and movement. Algorithms then process the signals to detect gestures in real time, without any offline calibration or per-user training data. The system uses just two or three wearable sensors, and nothing in the environment — largely reducing the barrier to casual users interacting with robots.</p> <p>While Conduct-A-Bot could potentially be used for various scenarios, including navigating menus on electronic devices or supervising autonomous robots, for this research the team used a Parrot Bebop 2 drone, although any commercial drone could be used.</p> <p>By detecting actions like rotational gestures, clenched fists, tensed arms, and activated forearms, Conduct-A-Bot can move the drone left, right, up, down, and forward, as well as allow it to rotate and stop.&nbsp;</p> <p>If you gestured to the right, a friend could likely interpret that they should move in that direction.
Similarly, if you waved your hand to the left, for example, the drone would follow suit and make a left turn.&nbsp;</p> <p>In tests, the drone correctly responded to 82 percent of over 1,500 human gestures when it was remotely controlled to fly through hoops. The system also correctly identified approximately 94 percent of cued gestures when the drone was not being controlled.</p> <p>“Understanding our gestures could help robots interpret more of the nonverbal cues that we naturally use in everyday life,” says Joseph DelPreto, lead author on the new paper. “This type of system could help make interacting with a robot more similar to interacting with another person, and make it easier for someone to start using robots without prior experience or external sensors.”&nbsp;</p> <p>This type of system could eventually target a range of applications for human-robot collaboration, including remote exploration, assistive personal robots, or manufacturing tasks like delivering objects or lifting materials.&nbsp;</p> <p>These intelligent tools are also consistent with social distancing — and could potentially open up a realm of future contactless work. For example, you can imagine machines being controlled by humans to safely clean a hospital room, or drop off medications, while letting us humans stay at a safe distance.</p> <p>Muscle signals can often provide information about states that are hard to observe from vision, such as joint stiffness or fatigue.</p> <p>For example, if you watch a video of someone holding a large box, you might have difficulty guessing how much effort or force was needed — and a machine would also have difficulty gauging that from vision alone. Using muscle sensors opens up possibilities to estimate not only motion, but also the force and torque required to execute that physical trajectory.</p> <p>For the gesture vocabulary currently used to control the robot, the movements were detected as follows:&nbsp;</p> <ul> <li> <p>stiffening the upper arm to stop the robot (similar to briefly cringing when seeing something going wrong): biceps and triceps muscle signals;</p> </li> <li> <p>waving the hand left/right and up/down to move the robot sideways or vertically: forearm muscle signals (with the forearm accelerometer indicating hand orientation);</p> </li> <li> <p>fist clenching to move the robot forward: forearm muscle signals; and</p> </li> <li> <p>rotating clockwise/counterclockwise to turn the robot: forearm gyroscope.</p> </li> </ul> <p>Machine learning classifiers detected the gestures using the wearable sensors. Unsupervised classifiers processed the muscle and motion data and clustered it in real time to learn how to separate gestures from other motions. A neural network also predicted wrist flexion or extension from forearm muscle signals.</p> <p>The system essentially calibrates itself to each person's signals while they're making gestures that control the robot, making it faster and easier for casual users to start interacting with robots. A simplified sketch of this self-calibrating, clustering-based idea appears below.</p> <p>In the future, the team hopes to expand the tests to include more subjects. And while the movements for Conduct-A-Bot cover common gestures for robot motion, the researchers want to extend the vocabulary to include more continuous or user-defined gestures.</p>
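<p>To make that concrete, here is a minimal, hypothetical sketch of the unsupervised step — not the authors’ actual pipeline, whose details are in their paper. The channel names, window sizes, simulated signals, and the choice of k-means are all illustrative assumptions; the real system also runs in real time and additionally uses a neural network for wrist classification.</p> <pre><code># Hypothetical sketch (not the actual Conduct-A-Bot code): cluster windows
# of muscle and motion features without labeled, per-user training data.
import numpy as np
from sklearn.cluster import KMeans

RATE = 1000    # assumed EMG sampling rate, samples per second
WINDOW = 200   # 200 ms analysis windows

def window_features(emg, gyro):
    """Per-window RMS envelope of each EMG channel plus mean gyro rate."""
    feats = []
    for start in range(0, len(emg) - WINDOW + 1, WINDOW):
        rms = np.sqrt(np.mean(emg[start:start + WINDOW] ** 2, axis=0))
        spin = gyro[start:start + WINDOW].mean(axis=0)
        feats.append(np.concatenate([rms, spin]))
    return np.array(feats)

# Simulate one minute of data: 2 EMG channels (biceps, forearm), 1 gyro axis.
rng = np.random.default_rng(seed=0)
emg = rng.normal(0.0, 1.0, size=(60 * RATE, 2))
gyro = rng.normal(0.0, 0.1, size=(60 * RATE, 1))
emg[10 * RATE:12 * RATE, 0] *= 6.0   # burst of biceps activity ("stop")
emg[30 * RATE:32 * RATE, 1] *= 6.0   # burst of forearm activity ("forward")

X = window_features(emg, gyro)

# Unsupervised step: discover rest vs. gesture-like windows on the fly,
# so a new user needs no offline calibration session.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Map clusters to commands by which muscle dominates each cluster's mean.
means = np.array([X[labels == k].mean(axis=0) for k in range(3)])
command = {int(means[:, 0].argmax()): "STOP",
           int(means[:, 1].argmax()): "FORWARD"}
for k in range(3):
    print(f"cluster {k}: mean features {np.round(means[k], 2)} "
          f"-> {command.get(k, 'REST')}")
</code></pre> <p>The point of the sketch is the design choice the article describes: because gestures produce feature vectors that separate naturally from rest, structure can be discovered from the signal stream itself rather than from a per-user labeled calibration session.</p>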
<p>Eventually, the hope is to have the robots learn from these interactions to better understand the tasks and provide more predictive assistance or increase their autonomy.&nbsp;</p> <p>“This system moves one step closer to letting us work seamlessly with robots so they can become more effective and intelligent tools for everyday tasks,” says DelPreto. “As such collaborations continue to become more accessible and pervasive, the possibilities for synergistic benefit continue to deepen.”&nbsp;</p> <p>DelPreto and Rus presented the paper virtually earlier this month at the ACM/IEEE International Conference on Human-Robot Interaction.</p> Lead author Joseph DelPreto controls a “Conduct-A-Bot” drone with his arm muscles. Photo courtesy of the researchers. Computer Science and Artificial Intelligence Laboratory (CSAIL), Robotics, Robots, Research, Wearable sensors, Neuroscience, Brain and cognitive sciences, Algorithms, Distributed Robotics Laboratory, School of Engineering, Artificial intelligence, Muscles, Human-computer interaction, MIT Schwarzman College of Computing, Electrical Engineering & Computer Science (eecs)

Six from MIT elected to American Academy of Arts and Sciences for 2020 http://news.mit.edu/2020/six-american-academy-arts-sciences-2020-0424 Prestigious honor society announces more than 250 new members. Fri, 24 Apr 2020 00:00:00 -0400 MIT News Office http://news.mit.edu/2020/six-american-academy-arts-sciences-2020-0424 <p>Six MIT faculty members are among more than 250 leaders from academia, business, public affairs, the humanities, and the arts elected to the American Academy of Arts and Sciences, the academy announced Thursday.</p> <p>One of the nation’s most prestigious honorary societies, the academy is also a leading center for independent policy research. Members contribute to academy publications, as well as studies of science and technology policy, energy and global security, social policy and American institutions, the humanities and culture, and education.</p> <p>Those elected from MIT this year are:</p> <ul> <li>Robert C. Armstrong, Chevron Professor in Chemical Engineering;</li> <li>Dave L. Donaldson, professor of economics;</li> <li>Catherine L. Drennan, professor of biology and chemistry;</li> <li>Ronitt Rubinfeld, professor of electrical engineering and computer science;</li> <li>Joshua B. Tenenbaum, professor of brain and cognitive sciences; and</li> <li>Craig Steven Wilder, Barton L. Weller Professor of History.</li> </ul> <p>“The members of the class of 2020 have excelled in laboratories and lecture halls, they have amazed on concert stages and in surgical suites, and they have led in board rooms and courtrooms,” said academy President&nbsp;David W. Oxtoby. “With today’s election announcement, these new members are united by a place in history and by an opportunity to shape the future through the academy’s work to advance the public good.”</p> <p>Since its founding in 1780, the academy has elected leading thinkers from each generation, including George Washington and Benjamin Franklin in the 18th century, Maria Mitchell and Daniel Webster in the 19th century, and Toni Morrison and Albert Einstein in the 20th century.
The current membership includes more than 250 Nobel and Pulitzer Prize winners.</p> Faculty, Brain and cognitive sciences, Computer Science and Artificial Intelligence Laboratory (CSAIL), Chemical engineering, MIT Energy Initiative, History, Biology, Economics, School of Engineering, School of Science, School of Humanities Arts and Social Sciences, Awards, honors and fellowships

Covid-19 calls MIT physician-scientist to front-line respiratory care http://news.mit.edu/2020/covid-19-calls-mit-physician-scientist-diane-chan-front-line-respiratory-care-0422 Neurologist and researcher Diane Chan pitches in to help New England get through tough times. Wed, 22 Apr 2020 14:20:01 -0400 David Orenstein | Picower Institute for Learning and Memory http://news.mit.edu/2020/covid-19-calls-mit-physician-scientist-diane-chan-front-line-respiratory-care-0422 <p>As both a neurologist who sees patients at Massachusetts General Hospital and a clinical postdoc conducting Alzheimer’s disease clinical studies at MIT’s Picower Institute, Diane Chan already has two demanding jobs. But as eastern New England’s need for Covid-19 care surged in late March, she volunteered to take on a third by joining the first wave of non-internal medicine doctors to be trained to evaluate patients in MGH’s respiratory illness clinics. The Covid-19 pandemic has called upon doctors and other health care workers, regardless of their usual specialty, to provide pulmonary care.</p> <p>On a recent afternoon, Chan’s newfound duties meant venturing through drenching, wind-whipped downpours to begin a shift at a new respiratory clinic MGH opened in Chelsea, Massachusetts, a densely populated town next to Boston’s Logan airport. Chelsea has emerged as an <a href="https://www.bostonglobe.com/2020/04/15/nation/chelseas-spike-covid-19-cases-challenges-hospitals-state/" rel="noopener noreferrer" target="_blank">especially intense hotspot</a> of Covid-19 infection. The venue was new, but Chan had already been seeing respiratory illness patients at the hospital for about two weeks after receiving the training. She’s learned to identify upper and lower respiratory infections, to evaluate whether someone has the virus, and to determine whether those patients need to go to the emergency department or can go home. She has also been trained in how to carefully don and doff personal protective equipment, including masks, gloves, face shields, and gowns. Not too long ago, Chan’s residency included a full year of internal medicine, so with this new training and the constant presence of her internal-medicine colleagues, she says she feels well-prepared and protected for this urgent new work.</p> <p>“I’m grateful that I have skills to contribute during this time when the hospital needs our help and patients need our help,” Chan says. “I’m glad to have some capabilities I can use to try to help people. I’m really grateful to my internal medicine colleagues for training us to be able to do this in the respiratory clinic.”</p> <p>Of course, she has already been helping people in her two regular jobs. Typically twice a week at MGH, but more recently via videoconferencing, she sees neurology patients with conditions such as dementia.
Even as they grapple with the unexpected need to use a new technology for remote care, she says, many patients are still happy to remain in touch with their doctor.</p> <p>At MIT’s Picower Institute for Learning and Memory, her work in the lab of Picower Professor <a href="https://picower.mit.edu/faculty/li-huei-tsai">Li-Huei Tsai</a> consists of running a program of studies testing whether light and sound stimulation at the 40-hertz frequency of gamma brain rhythms can improve memory and cognition in patients with mild Alzheimer’s disease and reduce the condition’s progression. If so, the experimental method, called <a href="https://picower.mit.edu/innovations-inventions/genus">GENUS</a>, could potentially help millions of people. Patients participate in the randomized, controlled, and blinded study from home, but because the pandemic has halted in-person visits, key evaluations are being delayed and so the study is being extended. Although she acknowledges feeling stress, for instance about the possibility of bringing the virus home to her family, what rises above for Chan is again a feeling of gratitude.</p> <p>“I feel grateful to the people in our study,” she says. “They have been working with us for a very long time. People were looking forward to their appointments that have had to be canceled. We are at the time when people in the control group would be switched to active stimulation. I’m very grateful to the whole group, everybody, that they would delay this time point and continue doing the stimulation at home until we reopen.”</p> <p>That will happen when the virus subsides. But for now, because it hasn’t, she is on the front lines of the battle against it. She is not alone. Another physician-scientist in the Picower Institute, Ravikiran Raju, is similarly helping out in the emergency department of a different Boston hospital.</p> <p>At MGH, Chan has seen respiratory patients from as far away as New Hampshire and Rhode Island. Most are referred by their primary care physician after a virtual meeting where they describe symptoms worrying enough to merit the in-person care that MGH still provides. At the clinics, doctors like Chan can listen to their lungs, check blood oxygen levels, take chest X-rays, and administer the nasopharyngeal swabs needed for Covid-19 testing. Results come back the next day.</p> <p>The most critical decision, though, is whether patients require hospitalization right away. Most do not, but Chan has so far referred three patients for such emergency care. One was a man in his 30s who had already tested positive for the virus, but when he came to the clinic he struggled to speak in full sentences and his blood oxygen level was down. His X-ray showed a lot of pneumonia — Covid-19 infection tends to produce a distinct pattern in chest X-rays. Another was a woman who walked in, but just in the time she was at the clinic her blood oxygen reading dropped by more than 10 points.</p> <p>Some patients, however, do not have the virus. One, for instance, had a scary-sounding rattle in her voice and labored breathing, but clinical examination revealed that her lungs sounded and looked clear. Moreover, her blood oxygen saturation was at a healthy reading and her vital signs looked strong. Chan and her colleagues sent the woman home with some inhalers to help her breathing. The next day her Covid test came back negative, confirming their decision.</p> <p>Some patients turn out to be more nervous than sick.
A man who came in hoping for a test turned out to have symptoms that were mild, at most. As they talked, Chan reassured him that he was doing well and doing the right things, but that he didn’t need care at the clinic.</p> <p>The skill set of counseling patients, attending to their state of mind as well as their physical health, is a big part of neurology.</p> <p>“I think that I’m using some of those reassurance skills for some of these patients,” she says.</p> <p>Even in normal times, many patients need Chan’s reassurance. Now she is extending her care to many more patients than she ever expected to see.</p> “I’m grateful that I have skills to contribute during this time when the hospital needs our help and patients need our help,” says Picower Institute researcher and Massachusetts General Hospital neurologist Diane Chan, who has been helping evaluate patients in respiratory illness clinics during the coronavirus pandemic. Photo courtesy of Diane Chan. Picower Institute, Brain and cognitive sciences, School of Science, Covid-19, Pandemic, Health care, Medicine, Community, Graduate, postdoctoral, Staff, Alzheimer's

Examining the social impact of Covid-19 http://news.mit.edu/2020/saxe-lab-examines-social-impact-covid-19-0421 Survey from the Saxe Lab aims to measure the toll of social isolation during the Covid-19 pandemic. Tue, 21 Apr 2020 12:10:01 -0400 Julie Pryor | McGovern Institute for Brain Research http://news.mit.edu/2020/saxe-lab-examines-social-impact-covid-19-0421 <p>After being forced to relocate from their MIT dorms during the Covid-19 crisis, two members of Professor <a href="https://mcgovern.mit.edu/profile/rebecca-saxe/">Rebecca Saxe</a>'s lab at the McGovern Institute for Brain Research are now applying their psychology skills to study the impact of mandatory relocation and social isolation on mental health.</p> <p>“When ‘social distancing’ measures hit MIT, we tried to process how the implementation of these policies would impact the landscape of our social lives,” explains graduate student <a href="https://mcgovern.mit.edu/2020/04/15/learning-from-social-isolation/">Heather Kosakowski</a>, who conceived of the study late one evening with undergraduate Michelle Hung. This landscape is broad, encompassing the effects of being uprooted and physically relocated from a place, but also changes in social connections, including friendships and even dating life.<br /> <br /> “I started speculating about how my life and the lives of other MIT students would change,” says Hung. “I was overwhelmed, sad, and scared. But then we realized that we were actually equipped to find the answers to our questions by conducting a study.”</p> <p>Together, Kosakowski and Hung developed a <a href="https://docs.google.com/forms/d/e/1FAIpQLSc91AVRZw-Qn6j7wsy5jOgocWNSMv3r-0MNE_Zg-oCBhlVyiA/viewform">survey</a> to measure how the social behavior of MIT students, postdocs, and staff is changing over the course of the pandemic. Survey questions were designed to measure loneliness and other aspects of mental health. The survey was sent to members of the MIT community and shared on social media in mid-March, when the pandemic hit the United States, and MIT made the unprecedented decision to send students home, shift to online instruction, and dramatically ramp down operations on campus.</p> <p>More than 500 people responded to the initial survey, ranging in age from 18 to 60, living in cities and countries around the world. Many but not all of those who responded were affiliated with MIT.
Kosakowski and Hung are sending follow-up surveys to participants every two weeks, and the team plans to collect data for the duration of the pandemic.</p> <p>“Throwing myself into creating the survey was a way to cope with feeling sad about leaving a community I love,” explains Hung, who flew home to California in March and admits that she struggles with feelings of loneliness now that she’s off campus.</p> <p>Although it is too soon to form any conclusions about their research, Hung predicts that feelings of loneliness may actually diminish over the course of the pandemic.</p> <p>“Humans have an impressive ability to adapt to change,” she says. “And I think in this virtual world, people will find novel ways to stay connected that we couldn’t have predicted.”</p> <p>Whether we find ourselves feeling more or less lonely as this Covid-19 crisis comes to an end, both Kosakowski and Hung agree that it will fundamentally change life as we know it.</p> <p>The Saxe lab seeks additional survey participants. To learn more about this study or to participate in the survey, <a href="https://docs.google.com/forms/d/e/1FAIpQLSc91AVRZw-Qn6j7wsy5jOgocWNSMv3r-0MNE_Zg-oCBhlVyiA/viewform">click here</a>.</p> McGovern scientists are studying the toll of social isolation during the Covid-19 pandemic. Photo: Michelle Hung. McGovern Institute, Brain and cognitive sciences, School of Science, Research, Covid-19, Pandemic, Community, Psychology, Social networks, Technology and society, Mental health, Wellbeing

Three from MIT awarded 2020 Guggenheim Fellowships http://news.mit.edu/2020/three-mit-awarded-2020-guggenheim-fellowships-0414 MIT professors Sabine Iatridou, Jonathan Gruber, and Rebecca Saxe have been selected to pursue their work “under the freest possible conditions.” Tue, 14 Apr 2020 13:10:01 -0400 Julie Pryor | McGovern Institute for Brain Research http://news.mit.edu/2020/three-mit-awarded-2020-guggenheim-fellowships-0414 <p>MIT faculty members Sabine Iatridou, Jonathan Gruber, and Rebecca Saxe are among 175 scientists, artists, and scholars awarded 2020 fellowships from the John Simon Guggenheim Foundation. Appointed on the basis of prior achievement and exceptional promise, the 2020 Guggenheim Fellows were selected from almost 3,000 applicants.</p> <p>“It’s exceptionally encouraging to be able to share such positive news at this terribly challenging time,” says Edward Hirsch, president of the foundation. “A Guggenheim Fellowship has always offered practical assistance, helping fellows do their work, but for many of the new fellows, it may be a lifeline at a time of hardship, a survival tool as well as a creative one.”</p> <p>Since 1925, the foundation has granted more than $375 million in fellowships to over 18,000 individuals, including Nobel laureates, Fields medalists, poets laureate, and winners of the Pulitzer Prize, among other internationally recognized honors. This year’s MIT recipients include a linguist, an economist, and a cognitive neuroscientist.</p> <p>Sabine Iatridou is professor of linguistics in MIT's Department of Linguistics and Philosophy. Her work focuses on syntax and the syntax-semantics interface, as well as comparative linguistics. She is the author or coauthor of a series of innovative papers about tense and modality that opened up whole new domains of research for the field. Since those publications, she has made foundational contributions to many branches of linguistics that connect form with meaning.
She is the recipient of the National Young Investigator Award (USA), of an honorary doctorate from the University of Crete in Greece, and of an award from the Royal Dutch Academy of Sciences. She was elected a fellow of the Linguistic Society of America. She is co-founder and co-director of the CreteLing Summer School of Linguistics.</p> <p>Jonathan Gruber is the Ford Professor of Economics at MIT, the director of the Health Care Program at the National Bureau of Economic Research, and the former president of the American Society of Health Economists. He has published more than 175 research articles, has edited six research volumes, and is the author of “Public Finance and Public Policy,” a leading undergraduate text; “Health Care Reform,” a graphic novel; and “Jump-Starting America: How Breakthrough Science Can Revive Economic Growth and the American Dream.” In 2006 he received the American Society of Health Economists Inaugural Medal for the best health economist in the nation aged 40 and under. He served as deputy assistant secretary for economic policy at the U.S. Department of the Treasury. He was a key architect of Massachusetts' ambitious health reform effort, and became an inaugural member of the Health Connector Board, the main implementing body for that effort. He served as a technical consultant to the Obama administration and worked with both the administration and Congress to help craft the Affordable Care Act. In 2011, he was named “One of the Top 25 Most Innovative and Practical Thinkers of Our Time” by <em>Slate</em> magazine.</p> <p>Rebecca Saxe is an associate investigator of the McGovern Institute and the John W. Jarve (1978) Professor in Brain and Cognitive Sciences. She studies human social cognition, using a combination of behavioral testing and brain imaging technologies. She is best known for her work on&nbsp;brain regions specialized for abstract concepts such as “theory of mind” tasks that involve understanding the mental states of other people. She also studies the development of the human brain during early infancy. She obtained her PhD from MIT and was a Harvard University junior fellow before joining the MIT faculty in 2006. Saxe was chosen in 2012 as a Young Global Leader by the World Economic Forum, and she received the 2014 Troland Award from the National Academy of Sciences. Her TED Talk, “How we read each other’s minds,” has been viewed over 3 million times.</p> <p>“As we grapple with the difficulties of the moment, it is also important to look to the future,” says Hirsch. “The artists, writers, scholars, and scientific researchers supported by the fellowship will help us understand and learn from what we are enduring individually and collectively, and it is an honor for the foundation to help them do their essential work.”</p> 2020 John Simon Guggenheim Foundation Fellows include (left to right) Sabine Iatridou, professor of linguistics; Jonathan Gruber, professor of economics; and Rebecca Saxe, professor of cognitive neuroscience. Photos (l-r): Jon Sachs, Bill Greene, Allan Adams. Linguistics and Philosophy, Economics, McGovern Institute, Brain and cognitive sciences, School of Science, School of Humanities Arts and Social Sciences, Awards, honors and fellowships, Faculty

Researchers achieve remote control of hormone release http://news.mit.edu/2020/remote-control-hormone-release-nanoparticles-0410 Using magnetic nanoparticles, scientists stimulate the adrenal gland in rodents to control release of hormones linked to stress.
Fri, 10 Apr 2020 13:59:59 -0400 Anne Trafton | MIT News Office http://news.mit.edu/2020/remote-control-hormone-release-nanoparticles-0410 <p>Abnormal levels of stress hormones such as adrenaline and cortisol are linked to a variety of mental health disorders, including depression and posttraumatic stress disorder (PTSD). MIT researchers have now devised a way to remotely control the release of these hormones from the adrenal gland, using magnetic nanoparticles.</p> <p>This approach could help scientists learn more about how hormone release influences mental health, and could eventually offer a new way to treat hormone-linked disorders, the researchers say.</p> <p>“We’re looking at how we can study and eventually treat stress disorders by modulating peripheral organ function, rather than doing something highly invasive in the central nervous system,” says Polina Anikeeva, an MIT professor of materials science and engineering and of brain and cognitive sciences.</p> <p>To achieve control over hormone release, Dekel Rosenfeld, an MIT-Technion postdoc in Anikeeva’s group, has developed specialized magnetic nanoparticles that can be injected into the adrenal gland. When exposed to a weak magnetic field, the particles heat up slightly, activating heat-responsive channels that trigger hormone release. This technique can be used to stimulate an organ deep in the body with minimal invasiveness.</p> <p>Anikeeva and Alik Widge, an assistant professor of psychiatry at the University of Minnesota and a former research fellow at MIT’s Picower Institute for Learning and Memory, are the senior authors of the study. Rosenfeld is the lead author of the paper, which appears today in <em>Science Advances</em>.</p> <p><strong>Controlling hormones</strong></p> <p>Anikeeva’s lab has previously devised several novel magnetic nanomaterials, including particles that can <a href="http://news.mit.edu/2019/lipid-magent-deliver-drugs-0819">release drugs at precise times</a> in specific locations in the body.</p> <p>In the new study, the research team wanted to explore the idea of treating disorders of the brain by manipulating organs that are outside the central nervous system but influence it through hormone release. One well-known example is the hypothalamic-pituitary-adrenal (HPA) axis, which regulates stress response in mammals. Hormones secreted by the adrenal gland, including cortisol and adrenaline, play important roles in depression, stress, and anxiety.</p> <p>“Some disorders that we consider neurological may be treatable from the periphery, if we can learn to modulate those local circuits rather than going back to the global circuits in the central nervous system,” says Anikeeva, who is a member of MIT’s Research Laboratory of Electronics and McGovern Institute for Brain Research.</p> <p>As a target to stimulate hormone release, the researchers decided on ion channels that control the flow of calcium into adrenal cells. Those ion channels can be activated by a variety of stimuli, including heat. When calcium flows through the open channels into adrenal cells, the cells begin pumping out hormones.
“If we want to modulate the release of those hormones, we need to be able to essentially&nbsp;modulate the influx of calcium into adrenal cells,” Rosenfeld says.</p> <p>Unlike previous research in Anikeeva’s group, in this study magnetothermal stimulation was applied to modulate the function of cells without artificially introducing any genes.</p> <p>To stimulate these heat-sensitive channels, which naturally occur in adrenal cells, the researchers designed nanoparticles made of magnetite, a type of iron oxide that forms tiny magnetic crystals about 1/5000 the thickness of a human hair. In rats, they found these particles could be injected directly into the adrenal glands and remain there for at least six months. When the rats were exposed to a weak magnetic field — about 50 millitesla, 100 times weaker than the fields used for magnetic resonance imaging (MRI) — the particles heated up by about 6 degrees Celsius, enough to trigger the calcium channels to open without damaging any surrounding tissue.</p> <p>The heat-sensitive channel that they targeted, known as TRPV1, is found in many sensory neurons throughout the body, including pain receptors. TRPV1 channels can be activated by capsaicin, the organic compound that gives chili peppers their heat, as well as by temperature. They are found across mammalian species, and belong to a family of many other channels that are also sensitive to heat.</p> <p>This stimulation triggered a hormone rush — doubling cortisol production and boosting noradrenaline by about 25 percent. That led to a measurable increase in the animals’ heart rates.</p> <p><strong>Treating stress and pain</strong></p> <p>The researchers now plan to use this approach to study how hormone release affects PTSD and other disorders, and they say that eventually it could be adapted for treating such disorders. This method would offer a much less invasive alternative to potential treatments that involve implanting a medical device to electrically stimulate hormone release, which is not feasible in organs such as the adrenal glands that are soft and highly vascularized, the researchers say.</p> <p>Another area where this strategy could hold promise is in the treatment of pain, because heat-sensitive ion channels are often found in pain receptors.</p> <p>“Being able to modulate pain receptors with this technique potentially will allow us to study pain, control pain, and have some clinical applications in the future, which hopefully may offer an alternative to medications or implants for chronic pain,” Anikeeva says. With further investigation of the existence of TRPV1 in other organs, the technique can potentially be extended to other peripheral organs such as the digestive system and the pancreas.</p> <p>The research was funded by the U.S. 
Defense Advanced Research Projects Agency ElectRx Program, a Bose Research Grant, the National Institutes of Health BRAIN Initiative, and an MIT-Technion fellowship.</p> MIT engineers have developed magnetic nanoparticles (shown in white squares) that can stimulate the adrenal gland to produce stress hormones such as adrenaline and cortisol. Image: Courtesy of the researchers. Research, Mental health, Materials Science and Engineering, DMSE, Brain and cognitive sciences, McGovern Institute, Research Laboratory of Electronics, School of Science, School of Engineering, Nanoscience and nanotechnology

Katie Collins, Vaishnavi Phadnis, and Vaibhavi Shah named 2020-21 Goldwater Scholars http://news.mit.edu/2020/three-mit-students-named-2020-goldwater-scholars-0409 Three MIT undergraduates who use computer science to explore human biology and health honored for their academic achievements. Thu, 09 Apr 2020 13:55:01 -0400 Fernanda Ferreira | School of Science http://news.mit.edu/2020/three-mit-students-named-2020-goldwater-scholars-0409 <p>MIT students Katie Collins, Vaishnavi Phadnis, and Vaibhavi Shah have been selected to receive a Barry Goldwater Scholarship for the 2020-21 academic year. Over 5,000 college students from across the United States were nominated for the scholarships, from which only 396 recipients were selected based on academic merit.&nbsp;</p> <p>The Goldwater scholarships have been conferred since 1989 by the Barry Goldwater Scholarship and Excellence in Education Foundation. These scholarships have supported undergraduates who go on to become leading scientists, engineers, and mathematicians in their respective fields. All of the 2020-21 Goldwater Scholars intend to obtain a doctorate in their area of research, including the three MIT recipients.&nbsp;</p> <p>Katie Collins, a third-year majoring in brain and cognitive sciences with minors in computer science and biomedical engineering, got involved with research in high school, when she worked on computational models of metabolic networks and synthetic gene networks in the lab of <a href="https://www.eecs.mit.edu/">Department of Electrical Engineering and Computer Science</a> Professor Timothy Lu at MIT. It was this project that led her to realize how challenging it is to model and analyze complex biological networks. She also learned that machine learning can provide a path for exploring these networks and understanding human diseases. This realization set Collins on a scientific path that is equally steeped in computer science and human biology.</p> <p>Over the past few years, Collins has become increasingly interested in the human brain, particularly what machine learning can learn from human common-sense reasoning and the way brains process sparse, noisy data. “I aim to develop novel computational algorithms to analyze complex, high-dimensional data in biomedicine, as well as advance modelling paradigms to improve our understanding of human cognition,” explains Collins. In his letter of recommendation, Professor Tomaso Poggio, the Eugene McDermott Professor in the <a href="https://bcs.mit.edu/">Department of Brain and Cognitive Sciences</a> and one of Collins’ mentors, wrote, “It is very difficult to imagine a better candidate for the Goldwater fellowship.” Collins plans to pursue a PhD studying machine learning or computational neuroscience and to one day run her own lab.
“I hope to become a professor, leading a research program at the interface of computer science and cognitive neuroscience.”</p> <p>Vaishnavi Phadnis, a second-year majoring in computer science and molecular biology, sees molecular and cellular biology as the bridge between chemistry and life, and she’s been enthralled with understanding that bridge since 7th grade, when she learned about the chemical basis of the cell. Phadnis spent two years working in a cancer research lab while still in high school, an experience that convinced her that research was not just her passion but also her future. “In my first week at MIT, I approached Professor Robert Weinberg, and I’ve been grateful to do research in his lab ever since,” she says.&nbsp;</p> <p>“Vaishnavi’s exuberance makes her a joy to have in the lab,” wrote Weinberg, who is the Daniel Ludwig Professor in the <a href="https://biology.mit.edu/">Department of Biology</a>. Phadnis is investigating ferroptosis, a recently discovered, iron-dependent form of cell death that may be relevant in neurodegeneration and is also a potential strategy for targeting highly aggressive cancer cells. “She is a phenomenon who has vastly exceeded our expectations of the powers of someone her age,” Weinberg says. Phadnis is thankful to Weinberg and all the scientific mentors, both past and present, who have inspired her along her research path. Deciphering the mechanisms behind fundamental cellular processes and exploring their application in human diseases is something Phadnis plans to continue doing in her future as a physician-scientist after pursuing an MD/PhD. “I hope to devote most of my time to leading my own research group, while also practicing medicine,” she says.&nbsp;</p> <p>Vaibhavi Shah, a third-year studying <a href="https://be.mit.edu/">biological engineering</a> with a minor in <a href="https://sts-program.mit.edu/">science, technology and society</a>, spent a lot of time in high school theorizing ways to tackle major shortcomings in medicine and science with the help of technology. “When I came to college, I was able to bring some of these ideas to fruition,” she says, working with both the Big Data in Radiology Group at the University of California at San Francisco and the lab of Professor Mriganka Sur, the Newton Professor of Neuroscience in the Department of Brain and Cognitive Sciences.&nbsp;</p> <p>Shah is particularly interested in integrating innovative research findings with traditional clinical practices. According to her, technology, like computer vision algorithms, can be adapted to diagnose diseases such as Alzheimer’s, allowing patients to start appropriate treatments earlier. “This is often harder to do at smaller, rural institutions that may not always have a specialist present,” says Shah, and algorithms can help fill that gap. One of the aims of Shah’s research is to improve the efficiency and equitability of physician decision-making. “My ultimate goal is to improve patient outcomes, and I aim to do this by tackling emerging scientific questions in machine learning and artificial intelligence at the forefront of neurology,” she says. The clinic is a place Shah expects to be in the future after obtaining her physician-scientist training, saying, “I hope to be a practicing neurosurgeon and clinical investigator.”</p> <p>The Barry Goldwater Scholarship and Excellence in Education Program was established by Congress in 1986 to honor Senator Barry Goldwater, a soldier and statesman who served the country for 56 years.
Awardees receive scholarships of up to $7,500 a year to cover costs related to tuition, room and board, fees, and books.</p> Left to right: Katie Collins, Vaishnavi Phadnis, and Vaibhavi Shah are three of the 396 undergraduates in the United States to receive 2020-21 Goldwater Scholarships. Photos courtesy of the students. School of Science, Brain and cognitive sciences, Electrical engineering and computer science (EECS), Biology, Biological engineering, School of Engineering, Students, Undergraduate, Awards, honors and fellowships, Technology and society

Researching from home: Science stays social, even at a distance http://news.mit.edu/2020/researching-from-home-picower-science-stays-strong-even-at-distance-0407 Picower Institute researchers are advancing their work in many ways despite time away from the lab required to corral Covid-19. Tue, 07 Apr 2020 14:00:01 -0400 David Orenstein | Picower Institute for Learning and Memory http://news.mit.edu/2020/researching-from-home-picower-science-stays-strong-even-at-distance-0407 <p>With all but a skeleton crew staying home from each lab to minimize the spread of Covid-19, scores of Picower Institute researchers are immersing themselves in the considerable amount of scientific work that can be done away from the bench. With piles of data to analyze; plenty of manuscripts to write; new skills to acquire; and fresh ideas to conceive, share, and refine for the future, neuroscientists have full plates, even when they are away from their, well, plates. They are proving that science can remain social, even if socially distant.</p> <p>Ever since the mandatory ramp down of on-campus research took hold March 20, for example, teams of researchers in the lab of <a href="https://picower.mit.edu/troy-littleton">Troy Littleton</a>, the Menicon Professor of Neuroscience, have sharpened their focus on two data-analysis projects that are every bit as essential to their science as acquiring the data in the lab in the first place. Research scientist Yulia Akbergenova and graduate student Karen Cunningham, for example, are poring over a huge amount of imaging data showing how the strength of connections between neurons, or synapses, matures and how that depends on the molecular components at the site. Another team, made up of Picower postdoc Suresh Jetti and graduate students Andres Crane and Nicole Aponte-Santiago, is analyzing another large dataset, this time of gene transcription, to learn what distinguishes two subclasses of motor neurons that form synapses of characteristically different strength.</p> <p>Work is similarly continuing among researchers in the lab of <a href="https://picower.mit.edu/elly-nedivi">Elly Nedivi</a>, the William R. (1964) and Linda R. Young Professor of Neuroscience. Since heading home, Senior Research Support Associate Kendyll Burnell has been looking at microscope images tracking how inhibitory interneurons innervate the visual cortex of mice throughout their development. By studying the maturation of inhibition, the lab hopes to improve understanding of the role of inhibitory circuitry in the experience-dependent changes, or plasticity, and development of the visual cortex, she says. As she’s worked, her poodle Soma (named for the central body structure of a neuron) has been by her side.</p> <p>Despite extra time with the comforts of home, though, it’s clear that nobody wanted this current mode of socially distant science. For every lab, it’s tremendously disruptive and costly.
But labs are finding many ways to make progress nonetheless.</p> <p>“Although we are certainly hurting because our lab work is at a standstill, the Miller lab is fortunate to have a large library of multiple-electrode neurophysiological data,” says Picower Professor <a href="https://picower.mit.edu/earl-k-miller">Earl Miller</a>. “The datasets are very rich. As our hypotheses and analytical tools develop, we can keep going back to old data to ask new questions. We are taking advantage of the wet lab downtime to analyze data and write papers. We have three under review and are writing at least three more right now.”</p> <p>Miller is inviting new collaborations regardless of the physical impediment of social distancing. A recent lab meeting held via the videoconferencing app Zoom included MIT Department of Brain and Cognitive Sciences Associate Professor Ila Fiete and her graduate student, Mikail Khona. The Miller lab has begun studying how neural rhythms move around the cortex and what that means for brain function. Khona presented models of how timing relationships affect those waves. While this kind of interaction between labs of the Picower Institute and the McGovern Institute for Brain Research would normally have taken place in person in MIT’s Building 46, neither lab let the pandemic get in the way.</p> <p>Similarly, the lab of <a href="https://tsailaboratory.mit.edu/li-huei-tsai/">Li-Huei Tsai</a>, Picower Professor and director of the Picower Institute, has teamed up with that of Manolis Kellis, professor in the MIT Computer Science and Artificial Intelligence Laboratory. They’re forming several small squads of experimenters and computational experts to launch analyses of gene expression and other data to illuminate the fate of individual cell types like interneurons or microglia in the context of the Alzheimer’s disease-afflicted brain. Other teams are focusing on analyses of questions such as how pathology varies in brain samples carrying different degrees of genetic risk factors. These analyses will prove useful for stages all along the scientific process, Tsai says, from forming new hypotheses to wrapping up papers that are well underway.</p> <p>Remote collaboration and communication are proving crucial to researchers in other ways, too, showing that online interactions, though distant, can be quite personally fulfilling.</p> <p>Nicholas DiNapoli, a research engineer in the lab of Associate Professor <a href="https://picower.mit.edu/kwanghun-chung">Kwanghun Chung</a>, is making the best of time away from the bench by learning about the lab’s computational pipeline for processing the enormous amounts of imaging data it generates. He’s also taking advantage of a new program within the lab in which Senior Computer Scientist Lee Kamentsky is teaching Python programming principles to anyone who wants to learn. The training occurs via Zoom two days a week.</p> <p>As part of a crowded calendar of Zoom meetings, or “Zeetings” as the lab has begun to call them, Newton Professor <a href="https://picower.mit.edu/mriganka-sur">Mriganka Sur</a> says he makes sure to have one-to-one meetings with everyone in the lab.
The team has also organized into small subgroups around different themes of the lab’s research.</p> <p>The lab has also maintained its cohesion by banding together informally to create novel work and social experiences.</p> <p>Graduate student Ning Leow, for example, used Zoom to create a co-working session in which participants kept a video connection open for hours at a time, just to be in each other’s virtual presence while they worked. Among a group of Sur lab friends, she read a paper related to her thesis and did a substantial amount of data analysis. She also advised a colleague on an analysis technique via the connection.</p> <p>“I’ve got to say that it worked out really well for me personally because I managed to get whatever I wanted to complete on my list done,” she says, “and there was also a sense of healthy accountability along with the sense of community.”</p> <p>Whether in person or at an officially imposed distance, science is social. In that spirit, graduate student K. Guadalupe “Lupe” Cruz organized a collaborative art event via Zoom for female scientists in brain and cognitive sciences at MIT. She took a photo of Rosalind Franklin, the scientist whose work was essential for resolving the structure of DNA, and divided it into nine squares to distribute to the event attendees. Without knowing the full picture, everyone drew just their section, talking all the while about how the strange circumstances of Covid-19 have changed their lives. At the end, they stitched their squares together to reconstruct the image.</p> <p>Examples abound of how Picower scientists, though physically apart, are still coming together to advance their research and to maintain the fabric of their shared experiences.</p> MIT Senior Research Associate Kendyll Burnell and her poodle Soma examine neural imaging data together at home during the Covid-19 pandemic. Image: Earl MillerPicower Institute, Brain and cognitive sciences, Computer Science and Artificial Intelligence Laboratory (CSAIL), Biology, School of Science, Covid-19, Neuroscience, Women in STEM, Community, Pandemic Neuroscientists find memory cells that help us interpret new situations http://news.mit.edu/2020/neuroscience-memory-cells-interpret-new-0406 Neurons that store abstract representations of past experiences are activated when a new, similar event takes place. Mon, 06 Apr 2020 11:00:00 -0400 Anne Trafton | MIT News Office http://news.mit.edu/2020/neuroscience-memory-cells-interpret-new-0406 <p>Imagine you are meeting a friend for dinner at a new restaurant. You may try dishes you haven’t had before, and your surroundings will be completely new to you. However, your brain knows that you have had similar experiences — perusing a menu, ordering appetizers, and splurging on dessert are all things that you have probably done when dining out.</p> <p>MIT neuroscientists have now identified populations of cells that encode each of these distinctive segments of an overall experience.
These chunks of memory, stored in the hippocampus, are activated whenever a similar type of experience takes place, and are distinct from the neural code that stores detailed memories of a specific location.</p> <p>The researchers believe that this kind of “event code,” which they discovered in a study of mice, may help the brain interpret novel situations and learn new information by using the same cells to represent similar experiences.</p> <p>“When you encounter something new, there are some really new and notable stimuli, but you already know quite a bit about that particular experience, because it’s a similar kind of experience to what you have already had before,” says Susumu Tonegawa, a professor of biology and neuroscience at the RIKEN-MIT Laboratory of Neural Circuit Genetics at MIT’s Picower Institute for Learning and Memory.</p> <p>Tonegawa is the senior author of the study, which appears today in <em>Nature Neuroscience</em>. Chen Sun, an MIT graduate student, is the lead author of the paper. New York University graduate student Wannan Yang and Picower Institute technical associate Jared Martin are also authors of the paper.</p> <p><strong>Encoding abstraction</strong></p> <p>It is well-established that certain cells in the brain’s hippocampus are specialized to store memories of specific locations. Research in mice has shown that within the hippocampus, neurons called place cells fire when the animals are in a specific location, or even if they are dreaming about that location.</p> <p>In the new study, the MIT team wanted to investigate whether the hippocampus also stores representations of more abstract elements of a memory. That is, instead of firing whenever you enter a particular restaurant, such cells might encode “dessert,” no matter where you’re eating it.</p> <p>To test this hypothesis, the researchers measured activity in neurons of the CA1 region of the mouse hippocampus as the mice repeatedly ran a four-lap maze. At the end of every fourth lap, the mice were given a reward. As expected, the researchers found place cells that lit up when the mice reached certain points along the track. However, the researchers also found sets of cells that were active during one of the four laps, but not the others. About 30 percent of the neurons in CA1 appeared to be involved in creating this “event code.”</p> <p>“This gave us the initial inkling that besides a code for space, cells in the hippocampus also care about this discrete chunk of experience called lap 1, or this discrete chunk of experience called lap 2, or lap 3, or lap 4,” Sun says.</p> <p>To further explore this idea, the researchers trained mice to run a square maze on day 1 and then a circular maze on day 2, in which they also received a reward after every fourth lap. They found that the place cells changed their activity, reflecting the new environment. However, the same sets of lap-specific cells were activated during each of the four laps, regardless of the shape of the track. The lap-encoding cells’ activity also remained consistent when laps were randomly shortened or lengthened.</p> <p>“Even in the new spatial locations, cells still maintain their coding for the lap number, suggesting that cells that were coding for a square lap 1 have now been transferred to code for a circular lap 1,” Sun says.</p> <p>The researchers also showed that if they used optogenetics to inhibit sensory input from a part of the brain called the medial entorhinal cortex (MEC), lap-encoding did not occur. 
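<p>To make the lap-coding analysis concrete, here is a minimal, hypothetical Python sketch (simulated firing rates and an arbitrary selectivity threshold, not the study’s actual pipeline) of how one might flag cells whose trial-averaged activity singles out one lap over the others:</p> <pre><code>import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_laps, n_trials = 100, 4, 20

# Simulated firing rates (Hz): neurons x laps x trials.
rates = rng.gamma(2.0, 1.0, size=(n_neurons, n_laps, n_trials))

# Seed roughly 30 percent of cells with elevated firing on one preferred lap.
is_selective = rng.random(n_neurons) < 0.3
preferred_lap = rng.integers(0, n_laps, size=n_neurons)
for i in np.where(is_selective)[0]:
    rates[i, preferred_lap[i]] += 4.0

mean_rates = rates.mean(axis=2)                 # average across trials
best = mean_rates.max(axis=1)                   # rate on each cell's best lap
rest = (mean_rates.sum(axis=1) - best) / (n_laps - 1)
selectivity = (best - rest) / (best + rest)     # 0 = flat, 1 = fires on one lap only

flagged = selectivity > 0.3                     # arbitrary threshold for this sketch
print(f"{flagged.mean():.0%} of simulated cells look lap-selective")
</code></pre> <p>On simulated data like this, the flagged fraction should land near the seeded 30 percent; the real analysis, of course, works from recorded spike trains rather than a toy model.</p>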
The researchers are now investigating what kind of input the MEC region provides to help the hippocampus create memories consisting of chunks of an experience.</p> <p><strong>Two distinct codes</strong></p> <p>These findings suggest that, indeed, every time you eat dinner, similar memory cells are activated, no matter where or what you’re eating. The researchers theorize that the hippocampus contains “two mutually and independently manipulatable codes,” Sun says. One encodes continuous changes in location, time, and sensory input, while the other organizes an overall experience into smaller chunks that fit into known categories such as appetizer and dessert.</p> <p>“We believe that both types of hippocampal codes are useful, and both are important,” Tonegawa says. “If we want to remember all the details of what happened in a specific experience, moment-to-moment changes that occurred, then the continuous monitoring is effective. But on the other hand, when we have a longer experience, if you put it into chunks, and remember the abstract order of the abstract chunks, that’s more effective than monitoring this long process of continuous changes.”</p> <p>The new MIT results “significantly advance our knowledge about the function of the hippocampus,” says Gyorgy Buzsaki, a professor of neuroscience at New York University School of Medicine, who was not part of the research team.</p> <p>“These findings are significant because they are telling us that the hippocampus does a lot more than just ‘representing’ space or integrating paths into a continuous long journey,” Buzsaki says. “From these remarkable results Tonegawa and colleagues conclude that they discovered an ‘event code,’ dedicated to organizing experience by events, and that this code is independent of spatial and time representations, that is, jobs also attributed to the hippocampus.”</p> <p>Tonegawa and Sun believe that networks of cells that encode chunks of experiences may also be useful for a type of learning called transfer learning, which allows you to apply knowledge you already have to help you interpret new experiences or learn new things. Tonegawa’s lab is now trying to find cell populations that might encode these specific pieces of knowledge.</p> <p>The research was funded by the RIKEN Center for Brain Science, the Howard Hughes Medical Institute, and the JPB Foundation.</p> “When you encounter something new, there are some really new and notable stimuli, but you already know quite a bit about that particular experience, because it’s a similar kind of experience to what you have already had before,” says Susumu Tonegawa, a professor of biology and neuroscience at the RIKEN-MIT Laboratory of Neural Circuit Genetics at MIT’s Picower Institute for Learning and Memory.Image: MIT NewsResearch, Memory, Brain and cognitive sciences, Picower Institute, School of Science, Neuroscience MIT scientist helps build Covid-19 resource to address shortage of face masks http://news.mit.edu/2020/mit-scientist-jill-crittenden-helps-build-covid-19-resource-addressing-face-mask-shortage-0403 Jill Crittenden and colleagues in a new consortium provide guidance for health care workers on decontamination and reuse of N95 face masks. Fri, 03 Apr 2020 11:20:01 -0400 Sabbi Lall | McGovern Institute for Brain Research http://news.mit.edu/2020/mit-scientist-jill-crittenden-helps-build-covid-19-resource-addressing-face-mask-shortage-0403 <p>When the Covid-19 crisis hit the United States this March, MIT neuroscientist Jill Crittenden wanted to help.
One of her greatest concerns was the shortage of face masks, which are a key weapon for health care providers, frontline service workers, and the public to protect against respiratory transmission of Covid-19. For those caring for Covid-19 patients, face masks that provide a near-100 percent seal are essential. These critical pieces of equipment, called N95 masks, are now scarce, and health-care workers are faced with reusing potentially contaminated masks.</p> <p>To address this, Crittenden joined a team of 60 scientists and engineers, students, and clinicians drawn from universities and the private sector to synthesize the scientific literature about mask decontamination and create a set of best practices for bad times. The group has now unveiled a website, <a href="http://www.n95decon.org">N95decon.org</a>, which provides a summary of this critical information.</p> <p>“I first heard about the group from Larissa Little, a Harvard graduate student working with John Doyle,” explains Crittenden, who is a research scientist in <a href="https://mcgovern.mit.edu/profile/ann-graybiel/">Ann Graybiel</a>'s lab at the <a href="http://mcgovern.mit.edu">McGovern Institute for Brain Research at MIT</a>. “The three of us began communicating because we are all also members of the Boston-based MGB Covid-19 Innovation Center, and we agreed that helping to assess the flood of information on N95 decontamination would be an important contribution.”</p> <p>The team members who came together over several weeks scoured hundreds of peer-reviewed publications and held continuous online meetings to review studies of decontamination methods that had been used to inactivate previous viral and bacterial pathogens, and to then assess the potential for these methods to neutralize the novel SARS-CoV-2 virus that causes Covid-19.</p> <p>“This group is absolutely amazing,” says Crittenden. “The Zoom meetings are very productive because it is all data- and solutions-driven. Everyone throws out ideas, what they know and what the literature source is, with the only goal being to get to a data-based consensus efficiently.”</p> <p><strong>Reliable resource</strong></p> <p>The goal of the consortium was to provide overwhelmed health officials, who don’t have the time to study the literature for themselves, reliable, pre-digested scientific information about the pros and cons of three decontamination methods that offer the best options should local shortages force a choice between decontamination and reuse, or going unmasked.</p> <p>The three methods involve (1) heat and humidity, (2) a specific wavelength of light called ultraviolet C (UVC), and (3) treatment with hydrogen peroxide vapors (HPV). The scientists did not endorse any one method, but instead sought to describe the circumstances under which each could inactivate the virus, provided rigorous procedures were followed. Devices that rely on heat, for instance, could be used under specific temperature, humidity, and time parameters. With UVC devices — which emit a particular wavelength and energy level of light — considerations involve making sure masks are properly oriented to the light so the entire surface is bathed in sufficient energy. The HPV method has the potential advantage of decontaminating masks in volume, as the U.S. Food and Drug Administration, acting in this emergency, has certified certain vendors to offer hydrogen peroxide vapor treatments on a large scale.
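<p>For readers who think in code, the decision factors just described can be condensed into a small data structure. The following Python snippet is purely an illustrative restatement of the text above, not an operational tool; N95decon.org remains the authoritative source:</p> <pre><code># Illustrative summary only; consult N95decon.org for the actual guidance.
DECON_METHODS = {
    "heat and humidity": "usable under specific temperature, humidity, "
                         "and time parameters",
    "ultraviolet C (UVC)": "masks must be oriented so the entire surface "
                           "is bathed in sufficient UVC energy",
    "hydrogen peroxide vapor (HPV)": "can decontaminate masks in volume "
                                     "via FDA-authorized vendors",
}

for method, consideration in DECON_METHODS.items():
    print(f"{method}: {consideration}")
</code></pre>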
In addition to giving health officials the scientific information to assess the methods best suited to their circumstances, <a href="http://www.n95decon.org">N95decon.org</a> points decision-makers to sources of reliable and detailed how-to information provided by other organizations, institutions, and commercial services.</p> <p>“While there is no perfect method for decontamination of N95 masks, it is crucial that decision-makers and users have as much information as possible about the strengths and weaknesses of various approaches,” says Manu Prakash, an associate professor of bioengineering at Stanford University, who helped coordinate this ad hoc, volunteer undertaking. “Manufacturers currently do not recommend N95 mask reuse. We aim to provide information and evidence in this critical time to help those on the front lines of this crisis make risk-management decisions given the specific conditions and limitations they face.”</p> <p>The researchers stressed that decontamination does not solve the N95 shortage, and expressed the hope that new masks will be made available in large numbers as soon as possible, so that health-care workers and first responders can be issued fresh protective gear whenever needed, as specified by the non-emergency guidelines set by the U.S. Centers for Disease Control and Prevention.</p> <p><strong>Forward thinking</strong></p> <p>Meanwhile, these ad hoc volunteers have pledged to continue working together to update the <a href="http://www.n95decon.org">N95decon.org</a> website as new information becomes available, and to coordinate their research efforts to plug the gaps in current knowledge and avoid duplication of effort.</p> <p>“We are, at heart, a group of people that want to help better equip hospitals and health-care personnel in this time of crisis,” says Brian Fleischer, a surgeon at the University of Chicago Medical Center and a member of the N95DECON consortium. “As a health care provider, many of my colleagues across the country have expressed concern with a lack of quality information in this ever-evolving landscape. I have learned a great deal from this team and I look forward to our continued collaboration to effect positive change.”</p> <p>Crittenden is hopeful that the new website will help health-care workers make informed decisions about the safest methods available for decontamination and reuse of N95 masks. “I know physicians personally who are very grateful that teams of scientists are doing the in-depth data analysis so that they can feel confident in what is best for their own health,” she says.</p> <p>Members of the team come from institutions including the University of California at Berkeley, the University of Chicago, Stanford University, Georgetown University, Harvard University, Seattle University, University of Utah, MIT, the University of Michigan, and from Consolidated Sterilizers and X, the Moonshot Factory.</p> McGovern research scientist Jill Crittenden helped the N95DECON consortium assess face mask decontamination protocols so health-care workers can easily access them for Covid-19 protection.Photo: Caitlin CunninghamMcGovern Institute, School of Science, Brain and cognitive sciences, Covid-19, Pandemic, Health sciences and technology, Health care, Medical devices How dopamine drives brain activity http://news.mit.edu/2020/dopamine-brain-activity-mri-0401 A specialized MRI sensor reveals the neurotransmitter’s influence on neural activity throughout the brain.
Wed, 01 Apr 2020 10:59:59 -0400 Anne Trafton | MIT News Office http://news.mit.edu/2020/dopamine-brain-activity-mri-0401 <p>Using a specialized magnetic resonance imaging (MRI) sensor, MIT neuroscientists have discovered how dopamine released deep within the brain influences both nearby and distant brain regions.</p> <p>Dopamine plays many roles in the brain, most notably related to movement, motivation, and reinforcement of behavior. However, until now it has been difficult to study precisely how a flood of dopamine affects neural activity throughout the brain. Using their new technique, the MIT team found that dopamine appears to exert significant effects in two regions of the brain’s cortex, including the motor cortex.</p> <p>“There has been a lot of work on the immediate cellular consequences of dopamine release, but here what we’re looking at are the consequences of what dopamine is doing on a more brain-wide level,” says Alan Jasanoff, an MIT professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering. Jasanoff is also an associate member of MIT’s McGovern Institute for Brain Research and the senior author of the study.</p> <p>The MIT team found that in addition to the motor cortex, the remote brain area most affected by dopamine is the insular cortex. This region is critical for many cognitive functions related to perception of the body’s internal states, including physical and emotional states.</p> <p>MIT postdoc Nan Li is the lead author of the study, which appears today in <em>Nature</em>.</p> <p><strong>Tracking dopamine</strong></p> <p>Like other neurotransmitters, dopamine helps neurons to communicate with each other over short distances. Dopamine holds particular interest for neuroscientists because of its role in motivation, addiction, and several neurodegenerative disorders, including Parkinson’s disease. Most of the brain’s dopamine is produced in the midbrain by neurons that connect to the striatum, where the dopamine is released.</p> <p>For many years, Jasanoff’s lab has been developing tools to study how molecular phenomena such as neurotransmitter release affect brain-wide functions. At the molecular scale, existing techniques can reveal how dopamine affects individual cells, and at the scale of the entire brain, functional magnetic resonance imaging (fMRI) can reveal how active a particular brain region is. However, it has been difficult for neuroscientists to determine how single-cell activity and brain-wide function are linked.</p> <p>“There have been very few brain-wide studies of dopaminergic function or really any neurochemical function, in large part because the tools aren’t there,” Jasanoff says. “We’re trying to fill in the gaps.”</p> <p>About 10 years ago, his lab developed MRI sensors that consist of magnetic proteins that can bind to dopamine. When this binding occurs, the sensors’ magnetic interactions with surrounding tissue weaken, dimming the tissue’s MRI signal. This allows researchers to continuously monitor dopamine levels in a specific part of the brain.</p> <p>In their new study, Li and Jasanoff set out to analyze how dopamine released in the striatum of rats influences neural function both locally and in other brain regions. First, they injected their dopamine sensors into the striatum, which is located deep within the brain and plays an important role in controlling movement. 
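<p>Before the next step, it helps to picture how such a sensor readout can be used: dopamine binding dims the local MRI signal, so analysis code can invert a binding curve to turn observed dimming into a concentration estimate. The following Python sketch is a rough illustration with invented parameter values; the dissociation constant and maximum dimming here are hypothetical, not the sensor’s published calibration:</p> <pre><code>import numpy as np

KD = 5.0       # hypothetical dissociation constant (micromolar)
DIM_MAX = 0.2  # hypothetical maximum fractional signal loss at saturation

def signal_dimming(dopamine_uM):
    """Fractional MRI signal loss when sensors bind dopamine (Langmuir model)."""
    occupancy = dopamine_uM / (KD + dopamine_uM)
    return DIM_MAX * occupancy

def estimate_dopamine(dimming):
    """Invert the binding curve: observed dimming to a dopamine estimate (uM)."""
    occupancy = np.clip(dimming / DIM_MAX, 0.0, 0.999)
    return KD * occupancy / (1.0 - occupancy)

print(estimate_dopamine(signal_dimming(3.0)))  # round-trips to ~3.0 uM
</code></pre>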
Then the researchers electrically stimulated a part of the brain called the lateral hypothalamus; such stimulation is a common experimental technique for rewarding behavior and inducing the brain to produce dopamine.</p> <p>Next, the researchers used their dopamine sensor to measure dopamine levels throughout the striatum. They also performed traditional fMRI to measure neural activity in each part of the striatum. To their surprise, they found that high dopamine concentrations did not make neurons more active. However, higher dopamine levels did make the neurons remain active for a longer period of time.</p> <p>“When dopamine was released, there was a longer duration of activity, suggesting a longer response to the reward,” Jasanoff says. “That may have something to do with how dopamine promotes learning, which is one of its key functions.”</p> <p><strong>Long-range effects</strong></p> <p>After analyzing dopamine release in the striatum, the researchers set out to determine how this dopamine might affect more distant locations in the brain. To do that, they performed traditional fMRI on the brain while also mapping dopamine release in the striatum. “By combining these techniques we could probe these phenomena in a way that hasn’t been done before,” Jasanoff says.</p> <p>The regions that showed the biggest surges in activity in response to dopamine were the motor cortex and the insular cortex. If confirmed in additional studies, the findings could help researchers understand the effects of dopamine in the human brain, including its roles in addiction and learning.</p> <p>“Our results could lead to biomarkers that could be seen in fMRI data, and these correlates of dopaminergic function could be useful for analyzing animal and human fMRI,” Jasanoff says.</p> <p>The research was funded by the National Institutes of Health and a Stanley Fahn Research Fellowship from the Parkinson’s Disease Foundation.</p> MIT biological engineers have created a specialized sensor that allows them to track dopamine in the brain using magnetic resonance imaging (MRI), as shown in the bottom row. Images in the top row show overall brain activity, as measured by functional MRI.Image: Courtesy of the researchersResearch, Brain and cognitive sciences, Learning, Memory, Magnetic resonance imaging (MRI), Neuroscience, McGovern Institute, School of Science, School of Engineering, National Institutes of Health (NIH) Engineers 3D print soft, rubbery brain implants http://news.mit.edu/2020/engineers-3d-print-brain-implants-0330 Technique may enable speedy, on-demand design of softer, safer neural devices. Mon, 30 Mar 2020 08:51:36 -0400 Jennifer Chu | MIT News Office http://news.mit.edu/2020/engineers-3d-print-brain-implants-0330 <p>The brain is one of our most vulnerable organs, as soft as the softest tofu. Brain implants, on the other hand, are typically made from metal and other rigid materials that over time can cause inflammation and the buildup of scar tissue.</p> <p>MIT engineers are working on developing soft, flexible neural implants that can gently conform to the brain’s contours and monitor activity over longer periods, without aggravating surrounding tissue.
Such flexible electronics could be softer alternatives to existing metal-based electrodes designed to monitor brain activity, and may also be useful in brain implants that stimulate neural regions to ease symptoms of epilepsy, Parkinson’s disease, and severe depression.</p> <p>Led by Xuanhe Zhao, a professor of mechanical engineering and of civil and environmental engineering, the research team has now developed a way to 3D print neural probes and other electronic devices that are as soft and flexible as rubber.</p> <p>The devices are made from a type of polymer, or soft plastic, that is electrically conductive. The team transformed this normally liquid-like conducting polymer solution into a substance more like viscous toothpaste — which they could then feed through a conventional 3D printer to make stable, electrically conductive patterns.</p> <p>The team printed several soft electronic devices, including a small, rubbery electrode, which they implanted in the brain of a mouse. As the mouse moved freely in a controlled environment, the neural probe was able to pick up the activity of a single neuron. Monitoring such signals can give scientists a higher-resolution picture of the brain’s activity, and can help in tailoring therapies and long-term brain implants for a variety of neurological disorders.</p> <p>“We hope by demonstrating this proof of concept, people can use this technology to make different devices, quickly,” says Hyunwoo Yuk, a graduate student in Zhao’s group at MIT. “They can change the design, run the printing code, and generate a new design in 30 minutes. Hopefully this will streamline the development of neural interfaces, fully made of soft materials.”</p> <p>Yuk and Zhao have published their results today in the journal <em>Nature Communications</em>. Their co-authors include Baoyang Lu and Jingkun Xu of the Jiangxi Science and Technology Normal University, along with Shen Lin and Jianhong Luo of Zhejiang University’s School of Medicine.</p> <p><img alt="" src="/sites/mit.edu.newsoffice/files/images/printing-electrodes-1_1.gif" /></p> <p><em><span style="font-size: 10px;">The team printed several soft electronic devices, including a small, rubbery electrode.</span></em></p> <p><strong>From soap water to toothpaste</strong></p> <p>Conducting polymers are a class of materials that scientists have eagerly explored in recent years for their unique combination of plastic-like flexibility and metal-like electrical conductivity. They are used commercially as antistatic coatings, as they can effectively carry away any electrostatic charges that build up on electronics and other static-prone surfaces.</p> <p>“These polymer solutions are easy to spray on electrical devices like touchscreens,” Yuk says. “But the liquid form is mostly for homogenous coatings, and it’s difficult to use this for any two-dimensional, high-resolution patterning. In 3D, it’s impossible.”</p> <p>Yuk and his colleagues reasoned that if they could develop a printable conducting polymer, they could then use the material to print a host of soft, intricately patterned electronic devices, such as flexible circuits, and single-neuron electrodes.</p> <p>In their new study, the team reports modifying poly (3,4-ethylenedioxythiophene) polystyrene sulfonate, or PEDOT:PSS, a conducting polymer typically supplied in the form of an inky, dark-blue liquid. The liquid is a mixture of water and nanofibers of PEDOT:PSS.
The liquid gets its conductivity from these nanofibers, which, when they come in contact, act as a sort of tunnel through which any electrical charge can flow.</p> <p>If the researchers were to feed this polymer into a 3D printer in its liquid form, it would simply bleed across the underlying surface. So the team looked for a way to thicken the polymer while retaining the material’s inherent electrical conductivity.</p> <p>They first freeze-dried the material, removing the liquid and leaving behind a dry matrix, or sponge, of nanofibers. Left alone, these nanofibers would become brittle and crack. So the researchers then remixed the nanofibers with a solution of water and an organic solvent, which they had previously developed, to form a hydrogel — a water-based, rubbery material embedded with nanofibers.</p> <p>They made hydrogels with various concentrations of nanofibers, and found that a range between 5 and 8 percent by weight of nanofibers produced a toothpaste-like material that was both electrically conductive and suitable for feeding into a 3D printer.</p> <p>“Initially, it’s like soap water,” Zhao says. “We condense the nanofibers and make it viscous like toothpaste, so we can squeeze it out as a thick, printable liquid.”</p> <p><strong>Implants on demand</strong></p> <p>The researchers fed the new conducting polymer into a conventional 3D printer and found they could produce intricate patterns that remained stable and electrically conductive.</p> <p>As a proof of concept, they printed a small, rubbery electrode, about the size of a piece of confetti. The electrode consists of a layer of flexible, transparent polymer, over which they then printed the conducting polymer, in thin, parallel lines that converged at a tip, measuring about 10 microns wide — small enough to pick up electrical signals from a single neuron.</p> <p><img alt="" src="/sites/mit.edu.newsoffice/files/images/printing-electrodes-2_0.gif" style="width: 500px; height: 281px;" /></p> <p><em><span style="font-size: 10px;">MIT researchers print flexible circuits (shown here) and other soft electrical devices using a new 3D-printing technique and conducting polymer ink.</span></em></p> <p>The team implanted the electrode in the brain of a mouse and found it could pick up electrical signals from a single neuron.</p> <p>“Traditionally, electrodes are rigid metal wires, and once there are vibrations, these metal electrodes could damage tissue,” Zhao says. “We’ve shown now that you could insert a gel probe instead of a needle.”</p> <p>In principle, such soft, hydrogel-based electrodes might even be more sensitive than conventional metal electrodes. That’s because most metal electrodes conduct electricity in the form of electrons, whereas neurons in the brain produce electrical signals in the form of ions. Any ionic current produced by the brain needs to be converted into an electrical signal that a metal electrode can register — a conversion that can result in some part of the signal getting lost in translation.
What’s more, ions can only interact with a metal electrode at its surface, which can limit the concentration of ions that the electrode can detect at any given time.</p> <p>In contrast, the team’s soft electrode is made from electron-conducting nanofibers, embedded in a hydrogel — a water-based material that ions can freely pass through.</p> <p>“The beauty of a conducting polymer hydrogel is, on top of its soft mechanical properties, it is made of hydrogel, which is ionically conductive, and also a porous sponge of nanofibers, which the ions can flow in and out of,” Lu says. “Because the electrode’s whole volume is active, its sensitivity is enhanced.”</p> <p>In addition to the neural probe, the team also fabricated a multielectrode array — a small, Post-it-sized square of plastic, printed with very thin electrodes, over which the researchers also printed a round plastic well. Neuroscientists typically fill the wells of such arrays with cultured neurons, and can study their activity through the signals that are detected by the device’s underlying electrodes.</p> <p>For this demonstration, the group showed they could replicate the complex designs of such arrays using 3D printing, versus traditional lithography techniques, which involve carefully etching metals, such as gold, into prescribed patterns, or masks — a process that can take days to complete a single device.</p> <p>“We make the same geometry and resolution of this device using 3D printing, in less than an hour,” Yuk says. “This process may replace or supplement lithography techniques, as a simpler and cheaper way to make a variety of neurological devices, on demand.”</p> MIT researchers have 3-D-printed soft electronically active polymers into a number of devices, including a pliable neural electrode, and (shown here) a flexible circuit.Images: courtesy of the researchers3-D printing, Biological engineering, Civil and environmental engineering, electronics, Mechanical engineering, Research, School of Engineering, Neuroscience, Brain and cognitive sciences Ed Boyden wins prestigious Wilhelm Exner Medal http://news.mit.edu/2020/ed-boyden-wins-prestigious-eilhelm-exner-medal-0318 Entrepreneurial science award recognizes scientists whose work opens up “new dimensions of economic progress.” Wed, 18 Mar 2020 15:30:01 -0400 Sabbi Lall | McGovern Institute for Brain Research http://news.mit.edu/2020/ed-boyden-wins-prestigious-eilhelm-exner-medal-0318 <p>The Austrian Association of Entrepreneurs has announced that <a href="https://mcgovern.mit.edu/profile/ed-boyden/" target="_blank">Edward S. Boyden</a>, the Y. Eva Tan Professor in Neurotechnology at MIT, has been awarded the 2020 <a href="https://www.wilhelmexner.org/en/" target="_blank">Wilhelm Exner Medal</a>.</p> <p>Named after Austrian businessman Wilhelm Exner, the medal has been awarded annually since 1921 to scientists, inventors, and designers who are “promoting the economy directly or indirectly in an outstanding manner.” Past honorees include 22 Nobel laureates.</p> <p>“It’s a great honor to receive this award, which recognizes not only the basic science impact of our group’s work, but the impact of the work in the industrial and startup worlds,” says Boyden, who is a professor of biological engineering and of brain and cognitive sciences at MIT.</p> <p>Boyden is a leading scientist whose work is widely used in industry, both in his own startup companies and in existing companies.
Boyden is also a member of MIT’s McGovern Institute for Brain Research, Media Lab, and Koch Institute for Integrative Cancer Research.</p> <p>“I am so thrilled that Ed has received this honor,” says Robert Desimone, director of the McGovern Institute. “Ed’s work has transformed neuroscience, through optogenetics, expansion microscopy, and other findings that are pushing biotechnology forward too.”</p> <p>Boyden is interested in understanding the brain as a computational system, and builds and applies tools for the analysis of neural circuit structure and dynamics, in behavioral and disease contexts. He played a critical role in the development of <a href="https://mcgovern.mit.edu/2012/11/26/optogenetics-a-light-switch-for-neurons/">optogenetics</a>, a revolutionary technique in which the activity of neurons can be controlled using light. Boyden also led the team that invented <a href="https://mcgovern.mit.edu/2015/01/15/mit-team-enlarges-brain-samples-making-them-easier-to-image/">expansion microscopy</a>, which gives an unprecedented view of the nanoscale structures of cells, even in the absence of special super-resolution microscopy equipment. Exner Medal laureates include notable luminaries of science, such as Robert Langer of MIT. In addition, Boyden has founded a number of companies based on his inventions in the busy biotech hub of Kendall Square in Cambridge, Massachusetts. These include a startup that is seeking to apply expansion microscopy to medical problems.</p> <p>Boyden will deliver his prize lecture at the Exner symposium in November 2020, during which economists and scientists come together to hear about the winner’s research.</p> MIT neuroscientist Ed Boyden has received the 2020 Wilhelm Exner Medal.Photo: Justin KnightMcGovern Institute, Biological engineering, Brain and cognitive sciences, Media Lab, Koch Institute, Innovation and Entrepreneurship (I&E), Awards, honors and fellowships, Faculty, School of Engineering, School of Science, School of Architecture and Planning School of Science announces 2020 Infinite Mile Awards http://news.mit.edu/2020/infinite-mile-awards-presented-school-science-staff-0316 Ten staff members recognized for dedication to School of Science and to MIT. Mon, 16 Mar 2020 13:45:01 -0400 School of Science http://news.mit.edu/2020/infinite-mile-awards-presented-school-science-staff-0316 <p>The MIT <a href="https://science.mit.edu/">School of Science</a> has announced the 2020 winners of the Infinite Mile Award.
Selected from a pool of staff members nominated by their colleagues for going above and beyond in their roles within the MIT community, these employees represent some of the most dedicated members of the Institute.</p> <p>The 2020 Infinite Mile Award winners in the School of Science are:</p> <p>Margaret Cabral, an administrative assistant in the <a href="https://biology.mit.edu/">Department of Biology</a>, nominated by Helene Kelsey;</p> <p>Rachel Donahue, the director of strategic scientific development for the <a href="https://bcs.mit.edu/">Department of Brain and Cognitive Sciences</a> and the Quest for Intelligence, nominated by professors Jim DiCarlo and Nick Roy;</p> <p>Slava Gerovitch, a lecturer in the <a href="https://math.mit.edu/">Department of Mathematics</a>, nominated by professors Pavel Etingof, David Jerison, JuLee Kim, and Ankur Moitra;</p> <p>Taylor Johns, a technical associate in the <a href="https://picower.mit.edu/">Picower Institute for Learning and Memory</a>, nominated by Professor Mriganka Sur and Grayson Sipe;</p> <p>Megan Jordan, the academic administrator in the <a href="https://eapsweb.mit.edu/">Department of Earth, Atmospheric and Planetary Sciences</a>, nominated by professors J. Taylor Perron, Andrew Babbin, Richard Binzel, Timothy Grove, and Paul O’Gorman, and by Karen Fosher;</p> <p>Renée LeBlanc, the senior financial officer in the <a href="https://picower.mit.edu/">Picower Institute for Learning and Memory</a>, nominated by Professor Li-Huei Tsai, Professor J. Troy Littleton, William Lawson, Lauren Anderson, Arlene Heywood-Dortch, Katherine Olson, Abby Reynolds, and Arek Hamalian;</p> <p>Aidan MacDonagh, a technical instructor in the <a href="http://physics.mit.edu/">Department of Physics</a>, nominated by Peter Dourmashkin, Professor Deepto Chakrabarty, Professor Robert Redwine, Professor Joseph Formaggio, and Michelle Tomasik;</p> <p>Avi Shporer, a research scientist in the <a href="https://space.mit.edu/">MIT Kavli Institute for Astrophysics and Space Research</a>, nominated by George Ricker, Maximilian Günther, and Tansu Daylan;</p> <p>Rebecca Teixeira Drake, a senior administrative assistant in the <a href="https://chemistry.mit.edu/">Department of Chemistry</a>, nominated by Professor Troy Van Voorhis and Jennifer Weisman; and</p> <p>Emily Wensberg, an administrative assistant in the <a href="https://chemistry.mit.edu/">Department of Chemistry</a>, nominated by Professor Troy Van Voorhis and Richard Wilk.</p> <p>They will be joining the previously announced <a href="http://news.mit.edu/2020/school-science-recognizes-members-infinite-kilometer-awards-0103">2020 Infinite Kilometer Award winners</a> from the School of Science at a celebratory reception in their honor to be held later this spring, in addition to receiving a monetary award.</p> School of Science, Biology, Brain and cognitive sciences, Quest for Intelligence, Mathematics, Picower Institute, EAPS, Physics, Kavli Institute, Chemistry, Staff, Awards, honors and fellowships 3 Questions: Marion Boulicault and Milo Phillips-Brown on ethics in a technical curriculum http://news.mit.edu/2020/integrating-ethics-technical-curriculum-0311 Philosophers are part of a team working on transforming technology ethics education at MIT.
Wed, 11 Mar 2020 14:00:01 -0400 School of Humanities, Arts, and Social Sciences http://news.mit.edu/2020/integrating-ethics-technical-curriculum-0311 <p><em>Marion Boulicault and Milo Phillips-Brown are part of a team working on transforming technology ethics education at MIT.</em></p> <p><em>Boulicault, a PhD candidate in MIT Philosophy, is a neuroethics fellow with the National Science Foundation's Center for Neurotechnology, for which she has organized ethics roundtables and taught ethics workshops in partnership with Momentum and MOSTEC, MIT programs for undergraduate and high school students. She also co-leads a National Institutes of Health-funded project examining the ethical implications of the use of neurotechnology for treating psychiatric disorders, and is a teaching fellow at Harvard University’s Embedded EthiCS program. Her dissertation on infertility measurement investigates technology ethics through the lens of feminist philosophy. For the past two years, she’s been a member of Harvard’s GenderSci Lab, an interdisciplinary research group engaged in generating feminist concepts, methods, and theories for scientific research on sex and gender.</em></p> <p><em>Phillips-Brown is a postdoc in the ethics of technology in MIT Philosophy (one half of the Department of Linguistics and Philosophy), a research fellow in digital ethics and governance at the Jain Family Institute, and a member of the Advisory Board on the Social and Ethical Responsibilities of Computing. He has collaborated with professors from computer science, brain and cognitive science, and across the School of Engineering to integrate ethics into engineering classes. He also teaches 24.131 (Ethics of Technology) in MIT Philosophy.</em><br /> &nbsp;<br /> <strong>Q:</strong> It seems that barely a day goes by without some controversy about technology in the news — for example, Facebook’s controversial policies toward political advertising on its platform. Do you think technology ethics education can help us understand and address these controversies? And if so, how?</p> <p><strong>A: </strong>Technology ethics education can definitely help us to&nbsp;understand and address these controversies. But, we believe that to do so most effectively, a new approach is needed.<br /> <br /> If you open an engineering textbook and flip to the ethics section — if there is one — you’ll likely see historical case studies of technologies gone awry (the space shuttle Challenger disaster, say) and bite-sized versions of moral theories (excerpts from philosophers John Stuart Mill on utilitarianism, or Immanuel Kant on why one should follow rules, for example). This has been the traditional approach to technology ethics education.</p> <p>The problem with the traditional approach is that it’s too far removed from what engineers and technologists do when they actually make things. You can, of course, learn from case studies of people’s mistakes, but students and instructors with whom we’ve worked say they feel alienated from traditional case studies and don’t always understand what these studies have to do with their own work. And the abstract realm of moral theory is, well, abstract!<br /> <br /> It’s not always clear how to operationalize these theories in practice. And we’re philosophers: we don’t mean this as a knock on moral theory. 
It’s just that Mill and Kant and most people who are in the business of doing moral theory weren’t necessarily thinking about the intersection of moral theory with technological change.</p> <p>The alternative approach we’ve been piloting across MIT is teaching ethics as a set of skills (or what Aristotle would call techné). If we’re going to make a difference in whether our students make things ethically and responsibly, they have to know how to do that. They need ethical skills that they can apply to their own work.<br /> <br /> This spans from skills for how to think about the seemingly mundane decisions they make on a daily basis in the lab, or in a meeting at their startup, to decisions about whether to accept industry funding or how to speak about their work in public, to fundamental decisions about whether a technology should be made in the first place. All of these decisions have ethical dimensions, and we want to teach students the skills to navigate them now and throughout their future careers.</p> <p><strong>Q:</strong> What does skills-based ethics pedagogy look like at MIT?<br /> &nbsp;<br /> <strong>A:</strong> In 2018, we, together with Abby Jaques and Jim Magarian, began piloting a skills-based approach as part of the New Engineering Education Transformation (NEET). NEET is an interdisciplinary School of Engineering initiative that’s oriented towards competency- and skill-based learning. Over the course of a year, NEET students build a technology — like an autonomous drone, or a biological “microchip” that simulates the human gut — and during an in-class workshop, we provide a structured, step-by-step guide for students on how to recognize and think through some of the complex ethical and political dimensions of their technology.</p> <p>We’ve also been working with professors in EECS&nbsp;[the Department of Electrical Engineering and Computer Science] to add ethics questions to engineering problem sets, with the expectation the students will be grappling with ethical decision-making as they train to become engineers.</p> <p>We don’t have all the answers. This is still very much an exploratory phase to figure out what works and what doesn’t with this new approach. One thing we’ve found so far is that students are more inclined to engage with ethical thinking when their professors signal that they care about ethical engineering. For example, professors can speak to why they care about ethics at the beginning of our in-class workshops. Putting ethics questions alongside technical material in problem sets is also effective because it signals that ethical issues are on par with, and inextricable from, technical ones.</p> <p><strong>Q:</strong> How would you like to see technology ethics integrated across MIT?<br /> <br /> <strong>A:</strong> Ultimately, we would like to see MIT take a fully immersive approach to ethics education. By that, we mean ethical reasoning skills should be taught, valorized, and rewarded at every stage and in every dimension of undergraduate and graduate education. The result, we hope, is that students — and the Institute at large — would come to see technology, ethics, and politics as inescapably intertwined. That’s in contrast to a model where the engineer makes something, then thinks “let’s check for ethical issues.” Ethics and politics are implicated every step of the way when technology is created.</p> <p>The MIT Schwarzman College of Computing is a great opportunity to exemplify this model. 
In a recent <a href="http://news.mit.edu/2019/3q-dan-huttenlocher-formation-mit-schwarzman-college-computing-1126" target="_self">article</a> in MIT News, the college’s dean, Dan Huttenlocher, wrote that “no other academic institution is taking on the scale and scope of change that we are pursuing at MIT.” The college has named David Kaiser, the Germeshausen Professor of the History of Science, and Julie Shah, an associate professor in the Department of Aeronautics and Astronautics, as associate deans for the program in Social and Ethical Responsibilities of Computing, so there is an opportunity for the “scale and scope” and also the direction of this monumental change to encompass social justice.</p> <p>Teaching ethics as a skill is a key part of this, as is having complementary classes in the School of Humanities, Arts, and Social Sciences that encourage students to see the ethics, politics, and social nature of technology through the lenses of various disciplines. For example, Milo has co-taught an MIT philosophy class, Ethics of Technology, that addresses moral and political theory in relation to questions about technology that are making headlines right now. In this class, students read an article about China’s surveillance state alongside Foucault on the Panopticon, or a white paper on best practices for accessible data visualization with a recent academic paper in the theory of disability.</p> <p>We are also currently partnering with Kate Trimble, the associate dean for public service, to integrate ethical reasoning into summer experiential education programs, such as UROP [Undergraduate Research Opportunities Program] and MISTI [MIT International Science and Technology Initiatives Program]. In doing so, we are working towards building an interdisciplinary, multimodal, and fully immersive approach to ethics education at MIT, one which provides students with opportunities for learning and practicing ethical reasoning skills across all of their experiences at the Institute.</p> <h5><em>Interview prepared by MIT SHASS Communications<br /> Editorial Team: Maria Iacobo and Emily Hiestand</em></h5> Milo Phillips-Brown (left) and Marion Boulicault are part of a team working on transforming technology ethics education at MIT.Photo: Jon SachsSchool of Humanities Arts and Social Sciences, Linguistics and Philosophy, School of Engineering, School of Science, Brain and cognitive sciences, Electrical engineering and computer science (EECS), MIT Schwarzman College of Computing, Technology and society, Ethics, 3 Questions, Aeronautical and astronautical engineering How the brain encodes landmarks that help us navigate http://news.mit.edu/2020/brain-encodes-landmarks-navigate-0310 Neuroscientists discover how a key brain region combines visual and spatial information to help us find our way. Tue, 10 Mar 2020 00:00:00 -0400 Anne Trafton | MIT News Office http://news.mit.edu/2020/brain-encodes-landmarks-navigate-0310 <p>When we move through the streets of our neighborhood, we often use familiar landmarks to help us navigate. And as we think to ourselves, “OK, now make a left at the coffee shop,” a part of the brain called the retrosplenial cortex (RSC) lights up.</p> <p>While many studies have linked this brain region with landmark-based navigation, exactly how it helps us find our way is not well-understood. 
A new study from MIT neuroscientists now reveals how neurons in the RSC use both visual and spatial information to encode specific landmarks.</p> <p>“There’s a synthesis of some of these signals — visual inputs and body motion — to represent concepts like landmarks,” says Mark Harnett, an assistant professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research. “What we went after in this study is the neuron-level and population-level representation of these different aspects of spatial navigation.”</p> <p>In a study of mice, the researchers found that this brain region creates a “landmark code” by combining visual information about the surrounding environment with spatial feedback of the mice’s own position along a track. Integrating these two sources of information allowed the mice to learn where to find a reward, based on landmarks that they saw.</p> <p>“We believe that this code that we found, which is really locked to the landmarks, and also gives the animals a way to discriminate between landmarks, contributes to the animals’ ability to use those landmarks to find rewards,” says Lukas Fischer, an MIT postdoc and the lead author of the study.</p> <p>Harnett is the senior author of the study, which appears today in the journal <em>eLife</em>. Other authors are graduate student Raul Mojica Soto-Albors and recent MIT graduate Friederike Buck.</p> <p><strong>Encoding landmarks</strong></p> <p>Previous studies have found that people with damage to the RSC have trouble finding their way from one place to another, even though they can still recognize their surroundings. The RSC is also one of the first areas affected in Alzheimer’s patients, who often have trouble navigating.</p> <p>The RSC is wedged between the primary visual cortex and the motor cortex, and it receives input from both of those areas. It also appears to be involved in combining two types of representations of space — allocentric, meaning the relationship of objects to each other, and egocentric, meaning the relationship of objects to the viewer.</p> <p>“The evidence suggests that RSC is really a place where you have a fusion of these different frames of reference,” Harnett says. “Things look different when I move around in the room, but that’s because my vantage point has changed. They’re not changing with respect to one another.”</p> <p>In this study, the MIT team set out to analyze the behavior of individual RSC neurons in mice, including how they integrate multiple inputs that help with navigation. To do that, they created a virtual reality environment for the mice by allowing them to run on a treadmill while they watch a video screen that makes it appear they are running along a track. The speed of the video is determined by how fast the mice run.</p> <p>At specific points along the track, landmarks appear, signaling that there’s a reward available a certain distance beyond the landmark. The mice had to learn to distinguish between two different landmarks, and to learn how far beyond each one they had to run to get the reward.</p> <p>Once the mice learned the task, the researchers recorded neural activity in the RSC as the animals ran along the virtual track. They were able to record from a few hundred neurons at a time, and found that most of them anchored their activity to a specific aspect of the task.</p> <p>There were three primary anchoring points: the beginning of the trial, the landmark, and the reward point. 
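<p>One way to picture this classification, as a hypothetical Python sketch rather than the study’s actual analysis: assign each cell to whichever candidate anchor best lines up its activity peaks across trials, that is, whichever gives the smallest trial-to-trial jitter in peak position:</p> <pre><code>import numpy as np

rng = np.random.default_rng(1)
n_trials = 30
starts = np.zeros(n_trials)                    # every trial starts at position 0
landmarks = rng.uniform(80, 120, n_trials)     # landmark position varies by trial
offsets = rng.choice([60.0, 100.0], n_trials)  # two landmark types, two reward distances
rewards = landmarks + offsets

# Simulate a landmark-anchored cell: peak activity about 20 cm past the landmark.
peaks = landmarks + 20 + rng.normal(0, 3, n_trials)

def peak_jitter(peaks, anchors):
    """Spread of peak positions measured relative to a candidate anchor."""
    return np.std(peaks - anchors)

jitters = {name: peak_jitter(peaks, anchors)
           for name, anchors in [("trial start", starts),
                                 ("landmark", landmarks),
                                 ("reward", rewards)]}
print("best anchor:", min(jitters, key=jitters.get))  # landmark wins: smallest jitter
</code></pre>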
The majority of the neurons were anchored to the landmarks, meaning that their activity would consistently peak at a specific point relative to the landmark, say 50 centimeters before it or 20 centimeters after it.</p> <p>Most of those neurons responded to both of the landmarks, but a small subset responded to only one or the other. The researchers hypothesize that those strongly selective neurons help the mice to distinguish between the landmarks and run the correct distance to get the reward.</p> <p>When the researchers used optogenetics (a tool that can turn off neuron activity) to block activity in the RSC, the mice’s performance on the task became much worse.</p> <p><strong>Combining inputs</strong></p> <p>The researchers also did an experiment in which the mice could choose to run or not while the video played at a constant speed, unrelated to the mice’s movement. The mice could still see the landmarks, but the location of the landmarks was no longer linked to a reward or to the animals’ own behavior. In that situation, RSC neurons did respond to the landmarks, but not as strongly as they did when the mice were using them for navigation.</p> <p>Further experiments allowed the researchers to tease out just how much neuron activation is produced by visual input (seeing the landmarks) and by feedback on the mouse’s own movement. However, simply adding those two numbers yielded totals much lower than the neuron activity seen when the mice were actively navigating the track.</p> <p>“We believe that is evidence for a mechanism of nonlinear integration of these inputs, where they get combined in a way that creates a larger response than what you would get if you just added up those two inputs in a linear fashion,” Fischer says.</p> <p>The researchers now plan to analyze data that they have already collected on how neuron activity evolves over time as the mice learn the task. They also hope to perform further experiments in which they could try to separately measure visual and spatial inputs into different locations within RSC neurons.</p> <p>The research was funded by the National Institutes of Health, the McGovern Institute, the NEC Corporation Fund for Research in Computers and Communications at MIT, and the Klingenstein-Simons Fellowship in Neuroscience.</p> MIT neuroscientists have identified a “landmark code” that helps the brain navigate our surroundings.Image: Christine Daniloff, MITResearch, Brain and cognitive sciences, McGovern Institute, School of Science, Neuroscience, National Institutes of Health (NIH) A new model of vision http://news.mit.edu/2020/computer-model-brain-vision-0304 Computer model of face processing could reveal how the brain produces richly detailed visual representations so quickly. Wed, 04 Mar 2020 14:00:00 -0500 Anne Trafton | MIT News Office http://news.mit.edu/2020/computer-model-brain-vision-0304 <p>When we open our eyes, we immediately see our surroundings in great detail. How the brain is able to form these richly detailed representations of the world so quickly is one of the biggest unsolved puzzles in the study of vision.</p> <p>Scientists who study the brain have tried to replicate this phenomenon using computer models of vision, but so far, leading models only perform much simpler tasks such as picking out an object or a face against a cluttered background. 
Now, a team led by MIT cognitive scientists has produced a computer model that captures the human visual system’s ability to quickly generate a detailed scene description from an image, and offers some insight into how the brain achieves this.</p> <p>“What we were trying to do in this work is to explain how perception can be so much richer than just attaching semantic labels on parts of an image, and to explore the question of how do we see all of the physical world,” says Josh Tenenbaum, a professor of computational cognitive science and a member of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds, and Machines (CBMM).</p> <p>The new model posits that when the brain receives visual input, it quickly performs a series of computations that reverse the steps that a computer graphics program would use to generate a 2D representation of a face or other object. This type of model, known as efficient inverse graphics (EIG), also correlates well with electrical recordings from face-selective regions in the brains of nonhuman primates, suggesting that the primate visual system may be organized in much the same way as the computer model, the researchers say.</p> <p>Ilker Yildirim, a former MIT postdoc who is now an assistant professor of psychology at Yale University, is the lead author of the paper, which appears today in <em>Science Advances</em>. Tenenbaum and Winrich Freiwald, a professor of neurosciences and behavior at Rockefeller University, are the senior authors of the study. Mario Belledonne, a graduate student at Yale, is also an author.</p> <p><strong>Inverse graphics</strong></p> <p>Decades of research on the brain’s visual system have studied, in great detail, how light input onto the retina is transformed into cohesive scenes. This understanding has helped artificial intelligence researchers develop computer models that can replicate aspects of this system, such as recognizing faces or other objects.</p> <p>“Vision is the functional aspect of the brain that we understand the best, in humans and other animals,” Tenenbaum says. “And computer vision is one of the most successful areas of AI at this point. We take for granted that machines can now look at pictures and recognize faces very well, and detect other kinds of objects.”</p> <p>However, even these sophisticated artificial intelligence systems don’t come close to what the human visual system can do, Yildirim says.</p> <p>“Our brains don’t just detect that there’s an object over there, or recognize and put a label on something,” he says. “We see all of the shapes, the geometry, the surfaces, the textures. We see a very rich world.”</p> <p>More than a century ago, the physician, physicist, and philosopher Hermann von Helmholtz theorized that the brain creates these rich representations by reversing the process of image formation. He hypothesized that the visual system includes an image generator that would be used, for example, to produce the faces that we see during dreams. Running this generator in reverse would allow the brain to work backward from the image and infer what kind of face or other object would produce that image, the researchers say.</p> <p>However, the question remained: How could the brain perform this process, known as inverse graphics, so quickly? 
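</p> <p>To see why speed is the crux, consider the textbook analysis-by-synthesis recipe: repeatedly adjust the parameters of a generator until its rendered output matches the observed image. A minimal sketch, assuming a stand-in renderer <code>render(params)</code> rather than any real graphics library:</p> <pre><code>import numpy as np

# Hypothetical sketch of iterative inverse graphics ("analysis by synthesis").
# `render` stands in for a graphics program; it is an assumed interface.

def invert_by_synthesis(image, render, n_params, steps=500, lr=0.01, eps=1e-4):
    params = np.zeros(n_params)            # initial guess at face parameters
    for _ in range(steps):                 # many cycles of refinement
        grad = np.zeros(n_params)
        for i in range(n_params):          # finite-difference gradient
            probe = np.zeros(n_params)
            probe[i] = eps
            hi = np.sum((render(params + probe) - image) ** 2)
            lo = np.sum((render(params - probe) - image) ** 2)
            grad[i] = (hi - lo) / (2 * eps)
        params -= lr * grad                # descend on reconstruction error
    return params
</code></pre> <p>Every step re-renders the scene, which is exactly what makes this loop slow. 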
Computer scientists have tried to create algorithms that could perform this feat, but the best previous systems require many cycles of iterative processing, taking much longer than the 100 to 200 milliseconds the brain requires to create a detailed visual representation of what you’re seeing. Neuroscientists believe perception in the brain can proceed so quickly because it is implemented in a mostly feedforward pass through several hierarchically organized layers of neural processing.</p> <p>The MIT-led team set out to build a special kind of deep neural network model to show how a neural hierarchy can quickly infer the underlying features of a scene — in this case, a specific face. In contrast to the standard deep neural networks used in computer vision, which are trained from labeled data indicating the class of an object in the image, the researchers’ network is trained from a model that reflects the brain’s internal representations of what scenes with faces can look like.</p> <p>Their model thus learns to reverse the steps performed by a computer graphics program for generating faces. These graphics programs begin with a three-dimensional representation of an individual face and then convert it into a two-dimensional image, as seen from a particular viewpoint. These images can be placed on an arbitrary background image. The researchers theorize that the brain’s visual system may do something similar when you dream or conjure a mental image of someone’s face.</p> <p>The researchers trained their deep neural network to perform these steps in reverse — that is, it begins with the 2D image and then adds features such as texture, curvature, and lighting, to create what the researchers call a “2.5D” representation. These 2.5D images specify the shape and color of the face from a particular viewpoint. Those are then converted into 3D representations, which don’t depend on the viewpoint.</p> <p>“The model gives a systems-level account of the processing of faces in the brain, allowing it to see an image and ultimately arrive at a 3D object, which includes representations of shape and texture, through this important intermediate stage of a 2.5D image,” Yildirim says.</p> <p><strong>Model performance</strong></p> <p>The researchers found that their model is consistent with data obtained by studying certain regions in the brains of macaque monkeys. In a study published in 2010, Freiwald and Doris Tsao of Caltech recorded the activity of neurons in those regions and analyzed how they responded to 25 different faces, seen from seven different viewpoints. That study revealed three stages of higher-level face processing, which the MIT team now hypothesizes correspond to three stages of their inverse graphics model: roughly, a 2.5D viewpoint-dependent stage; a stage that bridges from 2.5 to 3D; and a 3D, viewpoint-invariant stage of face representation.</p> <p>“What we show is that both the quantitative and qualitative response properties of those three levels of the brain seem to fit remarkably well with the top three levels of the network that we’ve built,” Tenenbaum says.</p> <p>The researchers also compared the model’s performance to that of humans in a task that involves recognizing faces from different viewpoints. This task becomes harder when researchers alter the faces by removing the face’s texture while preserving its shape, or distorting the shape while preserving relative texture. 
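</p> <p>With a generator whose shape and texture parameters are separate, such test stimuli are straightforward to construct. A hypothetical sketch, where the <code>render</code> interface and the parameter split are assumptions for illustration rather than the published pipeline:</p> <pre><code>import numpy as np

# Hypothetical sketch: build "texture-removed" and "shape-distorted" test
# faces from a generator with separate shape and texture parameters.

def make_test_stimuli(shape, texture, render, seed=0):
    rng = np.random.default_rng(seed)
    return {
        "original":      render(shape, texture),
        # remove texture while preserving shape: flatten texture to its mean
        "texture_free":  render(shape, np.full_like(texture, texture.mean())),
        # distort shape while preserving relative texture: jitter shape params
        "shape_warped":  render(shape + 0.5 * rng.standard_normal(shape.shape),
                                texture),
    }
</code></pre> <p>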
The new model’s performance was much more similar to that of humans than computer models used in state-of-the-art face-recognition software, additional evidence that this model may be closer to mimicking what happens in the human visual system.</p> <p>“This work is exciting because it introduces interpretable stages of intermediate representation into a feedforward neural network model of face recognition,” says Nikolaus Kriegeskorte, a professor of psychology and neuroscience at Columbia University, who was not involved in the research. “Their approach merges the classical idea that vision inverts a model of how the image was generated, with modern deep feedforward networks. It’s very interesting that this model better explains neural representations and behavioral responses.”</p> <p>The researchers now plan to continue testing the modeling approach on additional images, including objects that aren’t faces, to investigate whether inverse graphics might also explain how the brain perceives other kinds of scenes. In addition, they believe that adapting this approach to computer vision could lead to better-performing AI systems.</p> <p>“If we can show evidence that these models might correspond to how the brain works, this work could lead computer vision researchers to take more seriously and invest more engineering resources in this inverse graphics approach to perception,” Tenenbaum says. “The brain is still the gold standard for any kind of machine that sees the world richly and quickly.”</p> <p>The research was funded by the Center for Brains, Minds, and Machines at MIT, the National Science Foundation, the National Eye Institute, the Office of Naval Research, the New York Stem Cell Foundation, the Toyota Research Institute, and Mitsubishi Electric.</p> MIT cognitive scientists have developed a computer model of face recognition that performs a series of computations that reverse the steps that a computer graphics program would use to generate a 2D representation of a face.Image: courtesy of the researchersResearch, Computer vision, Brain and cognitive sciences, Center for Brains Minds and Machines, Computer Science and Artificial Intelligence Laboratory (CSAIL), School of Science, School of Engineering, National Science Foundation (NSF), Artificial intelligence, Machine learning, Neuroscience Empowering faculty partnerships across the globe http://news.mit.edu/2020/empowering-faculty-partnerships-across-globe-0303 MISTI Global Seed Funds program has delivered $22 million to faculty since 2008. Tue, 03 Mar 2020 12:20:01 -0500 MISTI http://news.mit.edu/2020/empowering-faculty-partnerships-across-globe-0303 <p>MIT faculty share their creative and technical talent on campus as well as across the globe, compounding the Institute’s impact through strong international partnerships. Thanks to the MIT Global Seed Funds (GSF) program, managed by the MIT International Science and Technology Initiatives (<a href="http://misti.mit.edu/" target="_blank">MISTI</a>), more of these faculty members will be able to build on these relationships to develop ideas and create new projects.</p> <p>“This MISTI fund was extremely helpful in consolidating our collaboration and has been the start of a long-term interaction between the two teams,” says 2017 GSF awardee Mehrdad Jazayeri, associate professor of brain and cognitive sciences and investigator at the McGovern Institute for Brain Research. 
“We have already submitted multiple abstracts to conferences together, mapped out several ongoing projects, and secured international funding thanks to the preliminary progress this seed fund enabled.”</p> <p>This year, the 28 funds that comprise MISTI GSF received 232 MIT applications. Over $2.3 million was awarded to 107 projects from 23 departments across the entire Institute. This brings the amount awarded to $22 million over the 12-year life of the program. Besides supporting faculty, these funds also provide meaningful educational opportunities for students. The majority of GSF teams include students from MIT and international collaborators, bolstering both their research portfolios and global experience.</p> <p>“This project has had important impact on my grad student’s education and development. She was able to apply techniques she has learned to a new and challenging system, mentor an international student, participate in a major international meeting, and visit CEA,” says Professor of Chemistry Elizabeth Nolan, a 2017 GSF awardee.</p> <p>On top of these academic and research goals, students are actively broadening their cultural experience and scope. “The environment at CEA differs enormously from MIT because it is a national lab and because lab structure and graduate education in France is markedly different than at MIT,” Nolan continues. “At CEA, she had the opportunity to present research to distinguished international colleagues.”</p> <p>These impactful partnerships unite faculty teams behind common goals to tackle worldwide challenges, helping to develop solutions that would not be possible without international collaboration. 2017 GSF winner Emilio Bizzi, professor emeritus of brain and cognitive sciences and emeritus investigator at the McGovern Institute, articulated the advantage of combining these individual skills within a high-level team. “The collaboration among researchers was valuable in sharing knowledge, experience, skills and techniques … as well as offering the probability of future development of systems to aid in rehabilitation of patients suffering TBI.”</p> <p>The research opportunities that grow from these seed funds often lead to published papers and additional funding leveraged from early results. The next call for proposals will be in mid-May.</p> <p>MISTI creates applied international learning opportunities for MIT students that increase their ability to understand and address real-world problems. MISTI collaborates with partners at MIT and beyond, serving as a vital nexus of international activity and bolstering the Institute’s research mission by promoting collaborations between MIT faculty members and their counterparts abroad.</p> Left to right: The Machu Picchu Design Heritage project is a past Global Seed Fund recipient. Paloma Gonzalez, Takehiko Nagakura, Chang Liu, and Wenzhe Peng pose with a panoramic view of Machu Picchu in Peru. They are part of an MIT team that has worked to digitally document the site.Photo: MISTIMISTI, McGovern Institute, Brain and cognitive sciences, School of Humanities Arts and Social Sciences, Research, Faculty, Funding, Global, Center for International Studies The neural basis of sensory hypersensitivity http://news.mit.edu/2020/neural-basis-of-sensory-hypersensitivity-0302 A new study may explain why people with autism are often highly sensitive to light and noise. 
Mon, 02 Mar 2020 11:00:00 -0500 Anne Trafton | MIT News Office http://news.mit.edu/2020/neural-basis-of-sensory-hypersensitivity-0302 <p>Many people with autism spectrum disorders are highly sensitive to light, noise, and other sensory input. A new study in mice reveals a neural circuit that appears to underlie this hypersensitivity, offering a possible strategy for developing new treatments.</p> <p>MIT and Brown University neuroscientists found that mice lacking a protein called Shank3, which has been previously linked with autism, were more sensitive to a touch on their whiskers than genetically normal mice. These Shank3-deficient mice also had overactive excitatory neurons in a region of the brain called the somatosensory cortex, which the researchers believe accounts for their over-reactivity.</p> <p>There are currently no treatments for sensory hypersensitivity, but the researchers believe that uncovering the cellular basis of this sensitivity may help scientists to develop potential treatments.</p> <p>“We hope our studies can point us to the right direction for the next generation of treatment development,” says Guoping Feng, the James W. and Patricia Poitras Professor of Neuroscience at MIT and a member of MIT’s McGovern Institute for Brain Research.</p> <p>Feng and Christopher Moore, a professor of neuroscience at Brown University, are the senior authors of the paper, which appears today in <em>Nature Neuroscience</em>. McGovern Institute research scientist Qian Chen and Brown postdoc Christopher Deister are the lead authors of the study.</p> <p><strong>Too much excitation</strong></p> <p>The Shank3 protein is important for the function of synapses — connections that allow neurons to communicate with each other. Feng has previously shown that mice lacking the Shank3 gene display many <a href="http://news.mit.edu/2011/autistic-mouse-0321">traits associated with autism</a>, including avoidance of social interaction, and compulsive, repetitive behavior.</p> <p>In the new study, Feng and his colleagues set out to study whether these mice also show sensory hypersensitivity. For mice, one of the most important sources of sensory input is the whiskers, which help them to navigate and to maintain their balance, among other functions.</p> <p>The researchers developed a way to measure the mice’s sensitivity to slight deflections of their whiskers, and then trained the mutant Shank3 mice and normal (“wild-type”) mice to display behaviors that signaled when they felt a touch to their whiskers. They found that mice that were missing Shank3 accurately reported very slight deflections that were not noticed by the normal mice.</p> <p>“They are very sensitive to weak sensory input, which barely can be detected by wild-type mice,” Feng says. “That is a direct indication that they have sensory over-reactivity.”</p> <p>Once they had established that the mutant mice experienced sensory hypersensitivity, the researchers set out to analyze the underlying neural activity. To do that, they used an <a href="http://news.mit.edu/2011/autistic-mouse-0321">imaging technique</a> that can measure calcium levels, which indicate neural activity, in specific cell types.</p> <p>They found that when the mice’s whiskers were touched, excitatory neurons in the somatosensory cortex were overactive. This was somewhat surprising because when Shank3 is missing, synaptic activity should drop. 
That led the researchers to hypothesize that the root of the problem was low levels of Shank3 in the inhibitory neurons that normally turn down the activity of excitatory neurons. Under that hypothesis, diminishing those inhibitory neurons’ activity would allow excitatory neurons to go unchecked, leading to sensory hypersensitivity.</p> <p>To test this idea, the researchers genetically engineered mice so that they could turn off Shank3 expression exclusively in inhibitory neurons of the somatosensory cortex. As they had suspected, they found that in these mice, excitatory neurons were overactive, even though those neurons had normal levels of Shank3.</p> <p>“If you only delete Shank3 in the inhibitory neurons in the somatosensory cortex, and the rest of the brain and the body is normal, you see a similar phenomenon where you have hyperactive excitatory neurons and increased sensory sensitivity in these mice,” Feng says.</p> <p><strong>Reversing hypersensitivity</strong></p> <p>The results suggest that reestablishing normal levels of neuron activity could reverse this kind of hypersensitivity, Feng says.</p> <p>“That gives us a cellular target for how in the future we could potentially modulate the inhibitory neuron activity level, which might be beneficial to correct this sensory abnormality,” he says.</p> <p>Many other studies in mice have linked defects in inhibitory neurons to neurological disorders, including Fragile X syndrome and Rett syndrome, as well as autism.</p> <p>“Our study is one of several that provide a direct and causative link between inhibitory defects and sensory abnormality, in this model at least,” Feng says. “It provides further evidence to support inhibitory neuron defects as one of the key mechanisms in models of autism spectrum disorders.”</p> <p>He now plans to study the timing of when these impairments arise during an animal’s development, which could help to guide the development of possible treatments. There are existing drugs that can turn down excitatory neurons, but these drugs have a sedative effect if used throughout the brain, so more targeted treatments could be a better option, Feng says.</p> <p>“We don’t have a clear target yet, but we have a clear cellular phenomenon to help guide us,” he says. “We are still far away from developing a treatment, but we’re happy that we have identified defects that point in which direction we should go.”</p> <p>The research was funded by the Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT, the Stanley Center for Psychiatric Research at the Broad Institute of MIT and Harvard, the Nancy Lurie Marks Family Foundation, the Poitras Center for Psychiatric Disorders Research at the McGovern Institute, the Varanasi Family, R. Buxton, and the National Institutes of Health.</p> MIT neuroscientists have discovered a brain circuit that appears to contribute to the sensory hypersensitivity often seen in people with autism spectrum disorders.Image: Jose-Luis Olivares, MITResearch, Autism, Brain and cognitive sciences, McGovern Institute, Neuroscience, School of Science, National Institutes of Health (NIH) Demystifying the world of deep networks http://news.mit.edu/2020/demystifying-world-deep-networks-0228 Researchers discover that no magic is required to explain why deep networks generalize despite going against statistical intuition. 
Fri, 28 Feb 2020 14:40:01 -0500 Kris Brewer | Center for Brains, Minds and Machines http://news.mit.edu/2020/demystifying-world-deep-networks-0228 <p>Introductory statistics courses teach us that, when fitting a model to some data, we should have more data than free parameters to avoid the danger of overfitting — fitting noisy data too closely, and thereby failing to fit new data. It is surprising, then, that in modern deep learning the practice is to have orders of magnitude more parameters than data. Despite this, deep networks show good predictive performance, and in fact do better the more parameters they have. Why would that be?</p> <p>It has been known for some time that good performance in machine learning comes from controlling the complexity of networks, which is not just a simple function of the number of free parameters. The complexity of a classifier, such as a neural network, depends on measuring the “size” of the space of functions that this network represents, with multiple technical measures previously suggested: Vapnik–Chervonenkis dimension, covering numbers, or Rademacher complexity, to name a few. Complexity, as measured by these notions, can be controlled during the learning process by imposing a constraint on the norm of the parameters — in short, on how “big” they can get. The surprising fact is that no such explicit constraint seems to be needed in training deep networks. Does deep learning lie outside of the classical learning theory? Do we need to rethink the foundations?</p> <p>In a new <em>Nature Communications</em> paper, “Complexity Control by Gradient Descent in Deep Networks,” a team from the Center for Brains, Minds, and Machines led by Director Tomaso Poggio, the Eugene McDermott Professor in the MIT Department of Brain and Cognitive Sciences, has shed some light on this puzzle by addressing the most practical and successful applications of modern deep learning: classification problems.</p> <p>“For classification problems, we observe that in fact the parameters of the model do not seem to converge, but rather grow in size indefinitely during gradient descent. However, in classification problems only the normalized parameters matter — i.e., the direction they define, not their size,” says co-author and MIT PhD candidate Qianli Liao. “The not-so-obvious thing we showed is that the commonly used gradient descent on the unnormalized parameters induces the desired complexity control on the normalized ones.”</p> <p>“We have known for some time in the case of regression for shallow linear networks, such as kernel machines, that iterations of gradient descent provide an implicit, vanishing regularization effect,” Poggio says. “In fact, in this simple case we probably know that we get the best-behaving maximum-margin, minimum-norm solution. The question we asked ourselves, then, was: Can something similar happen for deep networks?”</p> <p>The researchers found that it does. As co-author and MIT postdoc Andrzej Banburski explains, “Understanding convergence in deep networks shows that there are clear directions for improving our algorithms. In fact, we have already seen hints that controlling the rate at which these unnormalized parameters diverge allows us to find better performing solutions and find them faster.”</p> <p>What does this mean for machine learning? There is no magic behind deep networks. The same theory behind all linear models is at play here as well. 
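</p> <p>The flavor of the result can be reproduced in a toy setting: run gradient descent on a linear classifier with logistic loss on separable data, and the raw weights grow without bound while their direction, the normalized parameters, settles down. A minimal sketch, not the paper’s experiments:</p> <pre><code>import numpy as np

# Toy illustration (not the paper's experiments): on linearly separable data,
# gradient descent on the logistic loss drives ||w|| to grow indefinitely,
# while the direction w/||w|| converges, an implicit form of complexity control.

rng = np.random.default_rng(0)
X = np.vstack([rng.standard_normal((200, 2)) + 2.0,
               rng.standard_normal((200, 2)) - 2.0])
y = np.hstack([np.ones(200), -np.ones(200)])

w = np.zeros(2)
for step in range(1, 50001):
    margins = y * (X @ w)
    grad = -(y / (1.0 + np.exp(margins))) @ X / len(y)  # logistic-loss gradient
    w -= 0.1 * grad
    if step % 10000 == 0:
        print(f"step {step:6d}  ||w|| = {np.linalg.norm(w):6.3f}  "
              f"direction = {np.round(w / np.linalg.norm(w), 4)}")
# The printed norm keeps creeping up; the printed direction stops changing.
</code></pre> <p>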
This work suggests ways to improve deep networks, making them more accurate and faster to train.</p> MIT researchers (left to right) Qianli Liao, Tomaso Poggio, and Andrzej Banburski stand with their equations. Image: Kris BrewerCenter for Brains Minds and Machines, Brain and cognitive sciences, Electrical engineering and computer science (EECS), Machine learning, Artificial intelligence, Research Bringing artificial intelligence into the classroom, research lab, and beyond http://news.mit.edu/2020/bringing-artificial-intelligence-classroom-research-lab-and-beyond-0213 Through the Undergraduate Research Opportunities Program, students work to build AI tools with impact. Thu, 13 Feb 2020 16:50:01 -0500 Kim Martineau | MIT Quest for Intelligence http://news.mit.edu/2020/bringing-artificial-intelligence-classroom-research-lab-and-beyond-0213 <p>Artificial intelligence is reshaping how we live, learn, and work, and this past fall, MIT undergraduates got to explore and build on some of the tools coming out of research labs at MIT. Through the&nbsp;<a href="http://uaap.mit.edu/research-exploration/urop" target="_blank">Undergraduate Research Opportunities Program</a>&nbsp;(UROP), students worked with researchers at the MIT Quest for Intelligence and elsewhere on projects to improve AI literacy and K-12 education, understand face recognition and how the brain forms new memories, and speed up tedious tasks like cataloging new library material. Six projects are featured below.</p> <p><strong>Programming Jibo to forge an emotional bond with kids</strong></p> <p>Nicole Thumma met her first robot when she was 5, at a museum.&nbsp;“It was incredible that I could have a conversation, even a simple conversation, with this machine,” she says. “It made me think&nbsp;robots&nbsp;are&nbsp;the most complicated manmade thing, which made me want to learn more about them.”</p> <p>Now a senior at MIT, Thumma spent last fall writing dialogue for the social robot Jibo, the brainchild of&nbsp;<a href="https://www.media.mit.edu/">MIT Media Lab</a> Associate Professor&nbsp;<a href="https://www.media.mit.edu/people/cynthiab/overview/">Cynthia Breazeal</a>. In a UROP project co-advised by Breazeal and researcher&nbsp;<a href="https://www.media.mit.edu/people/haewon/overview/">Hae Won Park</a>, Thumma scripted mood-appropriate dialogue to help Jibo bond with students while playing learning exercises together.</p> <p>Because emotions are complicated, Thumma riffed on a set of basic feelings in her dialogue — happy/sad, energized/tired, curious/bored. If Jibo was feeling sad, but energetic and curious, she might program it to say, “I’m feeling blue today, but something that always cheers me up is talking with my friends, so I’m glad I’m playing with you.” A tired, sad, and bored Jibo might say, with a tilt of its head, “I don’t feel very good. It’s like my wires are all mixed up today. I think this activity will help me feel better.”&nbsp;</p> <p>In these brief interactions, Jibo models its vulnerable side and teaches kids how to express their emotions. At the end of an interaction, kids can give Jibo a virtual token to pick up its mood or energy level. “They can see what impact they have on others,” says Thumma. In all, she wrote 80 lines of dialogue, an experience that led her to stay on at MIT for an MEng in robotics. 
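</p> <p>A toy sketch of how mood-tagged dialogue like this might be organized and retrieved (the data structure, fallback rule, and third line are invented for illustration, though the first two lines come from the article):</p> <pre><code>import random

# Illustrative sketch only: select a scripted line matching the robot's
# current mood along three axes (valence, energy, curiosity).

LINES = {
    ("sad", "energized", "curious"):
        "I'm feeling blue today, but something that always cheers me up "
        "is talking with my friends, so I'm glad I'm playing with you.",
    ("sad", "tired", "bored"):
        "I don't feel very good. It's like my wires are all mixed up today. "
        "I think this activity will help me feel better.",
    ("happy", "energized", "curious"):
        "I feel great today! What should we try first?",
}

def pick_line(valence, energy, curiosity):
    line = LINES.get((valence, energy, curiosity))
    if line is None:
        # fall back to any line that at least matches the robot's valence
        matches = [l for k, l in LINES.items() if k[0] == valence]
        line = random.choice(matches) if matches else "Let's play together!"
    return line

print(pick_line("sad", "tired", "bored"))
</code></pre> <p>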
The Jibos she helped build are now in kindergarten classrooms in Georgia, offering emotional and intellectual support as they read stories and play word games with their human companions.</p> <p><strong>Understanding why familiar faces stand out</strong></p> <p>With a quick glance, the faces of friends and acquaintances jump out from those of strangers. How does the brain do it?&nbsp;<a href="https://mcgovern.mit.edu/profile/nancy-kanwisher/">Nancy Kanwisher</a>’s lab in the&nbsp;<a href="https://bcs.mit.edu/">Department of Brain and Cognitive Sciences</a> (BCS) is building computational models to understand the face-recognition process.&nbsp;<a href="http://news.mit.edu/2019/human-brain-face-recognition-0322">Two key findings</a>: the brain starts to register the gender and age of a face before recognizing its identity, and face perception is more robust for familiar faces.</p> <p>This fall, second-year student Joanne Yuan worked with postdoc&nbsp;<a href="http://www.katharinadobs.com/">Katharina Dobs</a>&nbsp;to understand&nbsp;why this is so.&nbsp;In earlier experiments, subjects were shown multiple photographs of familiar faces of American celebrities and unfamiliar faces of German celebrities while their brain activity was measured with magnetoencephalography. Dobs found that subjects processed age and gender before the celebrities’ identity regardless of whether the face was familiar. But they were much better at unpacking the gender and identity of faces they knew, like Scarlett Johansson, for example. Dobs suggests that the improved gender and identity recognition for familiar faces is due to a feed-forward mechanism rather than top-down retrieval of information from memory.&nbsp;</p> <p>Yuan has explored both hypotheses with convolutional neural networks (CNNs), a type of model now widely used in face-recognition tools. She trained a CNN on the face images and studied its layers to understand its processing steps. She found that the model, like Dobs’ human subjects, appeared to process gender and age before identity, suggesting that both CNNs and the brain are primed for face recognition in similar ways. In another experiment, Yuan trained two CNNs on familiar and unfamiliar faces and found that the CNNs, again like humans, were better at identifying the familiar faces.</p> <p>Yuan says she enjoyed exploring two fields — machine learning and neuroscience — while gaining an appreciation for the simple act of recognizing faces. “It’s pretty complicated and there’s so much more to learn,” she says.</p> <p><strong>Exploring memory formation</strong></p> <p>Protruding from the branching dendrites of brain cells are microscopic nubs that grow and change shape as memories form. Improved imaging techniques have allowed researchers to move closer to these nubs, or spines, deep in the brain to learn more about their role in creating and consolidating memories.</p> <p><a href="https://tonegawalab.mit.edu/susumu-tonegawa/">Susumu Tonegawa</a>, the Picower Professor of Biology and Neuroscience, has&nbsp;pioneered a technique for labeling clusters of brain cells, called “engram cells,” that are linked to specific memories in mice. Through conditioning, researchers train a mouse, for example, to recognize an environment. By tracking the evolution of dendritic spines in cells linked to a single memory trace, before and after the learning episode, researchers can estimate where memories may be physically stored.&nbsp;</p> <p>But it takes time. 
Hand-labeling spines in a stack of 100 images can take hours — more, if the researcher needs to consult images from previous days to verify that a spine-like nub really is one, says&nbsp;Timothy O’Connor, a software engineer in BCS helping with the project.&nbsp;With 400 images taken in a typical session, annotating the images can take longer than collecting them, he adds.</p> <p>O’Connor&nbsp;contacted the Quest&nbsp;<a href="https://bridge.mit.edu/">Bridge</a>&nbsp;to see if the process could be automated. Last fall, undergraduates Julian Viera and Peter Hart began work with Bridge AI engineer Katherine Gallagher to train a neural network to automatically pick out the spines. Because spines vary widely in shape and size, teaching the computer what to look for is one big challenge facing the team as the work continues. If successful, the tool could be useful to a hundred other labs across the country.</p> <p>“It’s exciting to work on a project that could have a huge amount of impact,” says Viera. “It’s also cool to be learning something new in computer science and neuroscience.”</p> <p><strong>Speeding up the archival process</strong></p> <p>Each year, Distinctive Collections at the MIT Libraries receives&nbsp;a large volume of personal letters, lecture notes, and other materials from donors inside and outside of MIT&nbsp;that tell MIT’s story and document the history of science and technology.&nbsp;Each of these unique items must be organized and described, with a typical box of material taking up to 20 hours to process and make available to users.</p> <p>To make the work go faster, Andrei Dumitrescu and Efua Akonor, undergraduates at MIT and Wellesley College respectively, are working with Quest Bridge’s Katherine Gallagher to develop an automated system for processing archival material donated to MIT. Their goal: to&nbsp;develop a machine-learning pipeline that can categorize and extract information from scanned images of the records. To accomplish this task, they turned to the U.S. Library of Congress (LOC), which has digitized much of its extensive holdings.&nbsp;</p> <p>This past fall, the students pulled images of about&nbsp;70,000 documents, including correspondence, speeches, lecture notes, photographs, and books&nbsp;housed at the LOC, and trained a classifier to distinguish a letter from, say, a speech. They are now using optical character recognition and a text-analysis tool&nbsp;to extract key details like&nbsp;the date, author, and recipient of a letter, or the date and topic of a lecture. They will soon incorporate object recognition to describe the content of a&nbsp;photograph,&nbsp;and are looking forward to&nbsp;testing&nbsp;their system on the MIT Libraries’ own digitized data.</p> <p>One&nbsp;highlight of the project was learning to use Google Cloud. “This is the real world, where there are no directions,” says Dumitrescu. “It was fun to figure things out for ourselves.”&nbsp;</p> <p><strong>Inspiring the next generation of robot engineers</strong></p> <p>From smartphones to smart speakers, a growing number of devices live in the background of our daily lives, hoovering up data. What we lose in privacy we gain in time-saving personalized recommendations and services. 
It’s one of AI’s defining tradeoffs that kids should understand, says third-year student Pablo&nbsp;Alejo-Aguirre.&nbsp;“AI brings us&nbsp;beautiful and&nbsp;elegant solutions, but it also has its limitations and biases,” he says.</p> <p>Last year, Alejo-Aguirre worked on an AI literacy project co-advised by Cynthia Breazeal and graduate student&nbsp;<a href="https://www.media.mit.edu/people/randiw12/publications/">Randi Williams</a>. In collaboration with the nonprofit&nbsp;<a href="https://i2learning.org/">i2 Learning</a>, Breazeal’s lab has developed an AI curriculum around a robot named Gizmo that teaches kids how to&nbsp;<a href="https://www.media.mit.edu/projects/ai-5-8/overview/">train their own robot</a>&nbsp;with an Arduino micro-controller and a user interface based on Scratch-X, a drag-and-drop programming language for children.&nbsp;</p> <p>To make Gizmo accessible for third-graders, Alejo-Aguirre developed specialized programming blocks that give the robot simple commands like, “turn left for one second,” or “move forward for one second.” He added Bluetooth to control Gizmo remotely and simplified its assembly, replacing screws with acrylic plates that slide and click into place. He also gave kids the choice of rabbit and frog-themed Gizmo faces.&nbsp;“The new design is a lot sleeker and cleaner, and the edges are more kid-friendly,” he says.&nbsp;</p> <p>After building and testing several prototypes, Alejo-Aguirre and Williams demoed their creation last summer at a robotics camp. This past fall, Alejo-Aguirre manufactured 100 robots that are now in two schools in Boston and a third in western Massachusetts.&nbsp;“I’m proud of the technical breakthroughs I made through designing, programming, and building the robot, but I’m equally proud of the knowledge that will be shared through this curriculum,” he says.</p> <p><strong>Predicting stock prices with machine learning</strong></p> <p>In search of a practical machine-learning application to learn more about the field, sophomores Dolapo Adedokun and Daniel Adebi hit on stock picking. “We all know buy, sell, or hold,” says Adedokun. “We wanted to find an easy challenge that anyone could relate to, and develop a guide for how to use machine learning in that context.”</p> <p>The two friends approached the Quest Bridge with their own idea for a UROP project after they were turned away by several labs because of their limited programming experience, says Adedokun. Bridge engineer Katherine Gallagher, however, was willing to take on novices. “We’re building machine-learning tools for non-AI specialists,” she says. “I was curious to see how Daniel and Dolapo would approach the problem and reason through the questions they encountered.”</p> <p>Adebi wanted to learn more about reinforcement learning, the trial-and-error AI technique that has allowed computers to surpass humans at chess, Go, and a growing list of video games. So, he and Adedokun worked with Gallagher to structure an experiment to see how reinforcement learning would fare against another AI technique, supervised learning, in predicting stock prices.</p> <p>In reinforcement learning, an agent is turned loose in an unstructured environment with one objective: to maximize a specific outcome (in this case, profits) without being told explicitly how to do so. 
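</p> <p>A stripped-down sketch of that framing (state, a buy/sell/hold action, profit as the reward) might look like the following; the synthetic prices and the random policy are illustrative assumptions, not the students’ notebook:</p> <pre><code>import numpy as np

# Illustrative sketch: trading framed as reinforcement learning. The agent
# sees recent prices, picks buy/sell/hold, and is rewarded with the change
# in portfolio value. Prices are synthetic; none of this is the real model.

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.standard_normal(500))  # synthetic price series

def run_episode(policy, cash=1000.0, shares=0):
    total_reward = 0.0
    for t in range(len(prices) - 1):
        state = prices[max(0, t - 10):t + 1]        # last few prices
        action = policy(state)
        if action == "buy" and cash >= prices[t]:
            cash -= prices[t]; shares += 1
        elif action == "sell" and shares > 0:
            cash += prices[t]; shares -= 1
        # reward: change in portfolio value as the price moves
        total_reward += shares * (prices[t + 1] - prices[t])
    return total_reward

random_policy = lambda state: rng.choice(["buy", "sell", "hold"])
print(run_episode(random_policy))  # a learning agent would improve on this
</code></pre> <p>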
Supervised learning, by contrast, uses labeled data to accomplish a goal, much like a problem set with the correct answers included.</p> <p>Adedokun and Adebi trained both models on seven years of stock-price data, from 2010-17, for Amazon, Microsoft, and Google. They then compared profits generated by the reinforcement learning model and a trading algorithm based on the supervised model’s price predictions for the following 18 months; they found that their reinforcement learning model produced higher returns.</p> <p>They developed a Jupyter notebook to share what they learned and explain how they built and tested their models. “It was a valuable exercise for all of us,” says Gallagher. “Daniel and Dolapo got hands-on experience with machine-learning fundamentals, and I got insight into the types of obstacles users with their background might face when trying to use the tools we’re building at the Bridge.”</p> Students participating in MIT Quest for Intelligence-funded UROP projects include: (clockwise from top left) Nicole Thumma, Joanne Yuan, Julian Viera, Andrei Dumitrescu, Pablo Alejo-Aguirre, and Dolapo Adedokun.Photo panel: Samantha SmileyQuest for Intelligence, Brain and cognitive sciences, Media Lab, Libraries, School of Engineering, School of Science, Artifical intelligence, Algorithms, Computer science and technology, Machine learning, Undergraduate Research Opportunities Program (UROP), Students, Undergraduate, Electrical engineering and computer science (EECS) Bridging the gap between human and machine vision http://news.mit.edu/2020/bridging-gap-between-human-and-machine-vision-0211 Researchers develop a more robust machine-vision architecture by studying how human vision responds to changing viewpoints of objects. Tue, 11 Feb 2020 16:40:01 -0500 Kris Brewer | Center for Brains, Minds and Machines http://news.mit.edu/2020/bridging-gap-between-human-and-machine-vision-0211 <p>Suppose you look briefly from a few feet away at a person you have never met before. Step back a few paces and look again. Will you be able to recognize her face? “Yes, of course,” you probably are thinking. If this is true, it would mean that our visual system, having seen a single image of an object such as a specific face, recognizes it robustly despite changes to the object’s position and scale, for example. On the other hand, we know that state-of-the-art classifiers, such as vanilla deep networks, will fail this simple test.</p> <p>In order to recognize a specific face under a range of transformations, neural networks need to be trained with many examples of the face under the different conditions. In other words, they can achieve invariance through memorization, but cannot do it if only one image is available. Thus, understanding how human vision can pull off this remarkable feat is relevant for engineers aiming to improve their existing classifiers. It also is important for neuroscientists modeling the primate visual system with deep networks. 
In particular, it is possible that the invariance with one-shot learning exhibited by biological vision requires a rather different computational strategy than that of deep networks.&nbsp;</p> <p>A new paper in <em>Nature Scientific Reports</em> entitled “Scale and translation-invariance for novel objects in human vision,” by MIT PhD candidate in electrical engineering and computer science Yena Han and colleagues, discusses how they studied this phenomenon more carefully to create novel biologically inspired networks.</p> <p>“Humans can learn from very few examples, unlike deep networks. This is a huge difference with vast implications for engineering of vision systems and for understanding how human vision really works,” states co-author Tomaso Poggio — director of the Center for Brains, Minds and Machines (CBMM) and the Eugene McDermott Professor of Brain and Cognitive Sciences at MIT. “A key reason for this difference is the relative invariance of the primate visual system to scale, shift, and other transformations. Strangely, this has been mostly neglected in the AI community, in part because the psychophysical data were so far less than clear-cut. Han’s work has now established solid measurements of basic invariances of human vision.”</p> <p>To differentiate invariance arising from intrinsic computation from invariance gained through experience and memorization, the new study measured the range of invariance in one-shot learning. A one-shot learning task was performed by presenting Korean letter stimuli to human subjects who were unfamiliar with the language. These letters were initially presented a single time under one specific condition and tested at different scales or positions than the original condition. The first experimental result is that — just as you guessed — humans showed significant scale-invariant recognition after only a single exposure to these novel objects. The second result is that the range of position-invariance is limited, depending on the size and placement of objects.</p> <p>Next, Han and her colleagues performed a comparable experiment in deep neural networks designed to reproduce this human performance. The results suggest that to explain invariant recognition of objects by humans, neural network models should explicitly incorporate built-in scale-invariance. In addition, limited position-invariance of human vision is better replicated in the network by having the model neurons’ receptive fields increase as they are further from the center of the visual field. This architecture is different from commonly used neural network models, where an image is processed under uniform resolution with the same shared filters.</p> <p>“Our work provides a new understanding of the brain representation of objects under different viewpoints. 
It also has implications for AI, as the results provide new insights into what is a good architectural design for deep neural networks,” remarks Han, CBMM researcher and lead author of the study.</p> <p>Han and Poggio were joined by Gemma Roig and Gad Geiger in the work.</p> Yena Han (left) and Tomaso Poggio stand with an example of the visual stimuli used in a new psychophysics study.Photo: Kris BrewerCenter for Brains Minds and Machines, Brain and cognitive sciences, Machine learning, Artificial intelligence, Computer vision, Research, School of Science, Computer science and technology, Electrical Engineering & Computer Science (eecs), School of Engineering Brainstorming energy-saving hacks on Satori, MIT’s new supercomputer http://news.mit.edu/2020/brainstorming-energy-saving-hacks-satori-mit-supercomputer-0211 Three-day hackathon explores methods for making artificial intelligence faster and more sustainable. Tue, 11 Feb 2020 11:50:01 -0500 Kim Martineau | MIT Quest for Intelligence http://news.mit.edu/2020/brainstorming-energy-saving-hacks-satori-mit-supercomputer-0211 <p>Mohammad Haft-Javaherian planned to spend an hour at the&nbsp;<a href="http://bit.ly/2ReKlio">Green AI Hackathon</a>&nbsp;— just long enough to get acquainted with MIT’s new supercomputer,&nbsp;<a href="https://news.mit.edu/2019/ibm-gives-lift-artificial-intelligence-computing-mit-0826">Satori</a>. Three days later, he walked away with $1,000 for his winning strategy to shrink the carbon footprint of artificial intelligence models trained to detect heart disease.&nbsp;</p> <p>“I never thought about the kilowatt-hours I was using,” he says. “But this hackathon gave me a chance to look at my carbon footprint and find ways to trade a small amount of model accuracy for big energy savings.”&nbsp;</p> <p>Haft-Javaherian was among six teams to earn prizes at a hackathon co-sponsored by the&nbsp;<a href="https://researchcomputing.mit.edu/index">MIT Research Computing Project</a>&nbsp;and&nbsp;<a href="https://mitibmwatsonailab.mit.edu/">MIT-IBM Watson AI Lab</a> Jan. 28-30. The event was meant to familiarize students with Satori, the computing cluster IBM&nbsp;<a href="http://news.mit.edu/2019/ibm-gives-lift-artificial-intelligence-computing-mit-0826">donated</a> to MIT last year, and to inspire new techniques for building energy-efficient AI models that put less planet-warming carbon dioxide into the air.&nbsp;</p> <p>The event was also a celebration of Satori’s green-computing credentials. With an architecture designed to minimize the transfer of data, among other energy-saving features, Satori recently earned&nbsp;<a href="https://www.top500.org/green500/lists/2019/11/">fourth place</a>&nbsp;on the Green500 list of supercomputers. Its location gives it additional credibility: It sits on a remediated brownfield site in Holyoke, Massachusetts, now the&nbsp;<a href="https://www.mghpcc.org/">Massachusetts Green High Performance Computing Center</a>, which runs largely on low-carbon hydro, wind and nuclear power.</p> <p>A postdoc at MIT and Harvard Medical School, Haft-Javaherian came to the hackathon to learn more about Satori. He stayed for the challenge of trying to cut the energy intensity of his own work, focused on developing AI methods to screen the coronary arteries for disease. A new imaging method, optical coherence tomography, has given cardiologists a new tool for visualizing defects in the artery walls that can slow the flow of oxygenated blood to the heart. 
But even the experts can miss subtle patterns that computers excel at detecting.</p> <p>At the hackathon, Haft-Javaherian ran a test on his model and saw that he could cut its energy use eight-fold by reducing the time Satori’s graphics processors sat idle. He also experimented with adjusting the model’s number of layers and features, trading varying degrees of accuracy for lower energy use.&nbsp;</p> <p>A second team, Alex Andonian and Camilo Fosco, also won $1,000 by showing they could train a classification model nearly 10 times faster by optimizing their code and losing a small bit of accuracy. Graduate students in the Department of Electrical Engineering and Computer Science (EECS), Andonian and Fosco are currently training a classifier to tell legitimate videos from AI-manipulated fakes, to compete in Facebook’s&nbsp;<a href="https://ai.facebook.com/blog/deepfake-detection-challenge/">Deepfake Detection Challenge</a>. Facebook launched the contest last fall to crowdsource ideas for stopping the spread of misinformation on its platform ahead of the 2020 presidential election.</p> <p>If a technical solution to deepfakes is found, it will need to run on millions of machines at once, says Andonian. That makes energy efficiency key. “Every optimization we can find to train and run more efficient models will make a huge difference,” he says.</p> <p>To speed up the training process, they tried streamlining their code and lowering the resolution of their 100,000-video training set by eliminating some frames. They didn’t expect a solution in three days, but Satori’s size worked in their favor. “We were able to run 10 to 20 experiments at a time, which let us iterate on potential ideas and get results quickly,” says Andonian.&nbsp;</p> <p>As AI continues to improve at tasks like reading medical scans and interpreting video, models have grown bigger and more calculation-intensive, and thus, energy intensive. By one&nbsp;<a href="https://www.technologyreview.com/s/613630/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/">estimate</a>, training a large language-processing model produces nearly as much carbon dioxide as the cradle-to-grave emissions from five American cars. The footprint of the typical model is modest by comparison, but as AI applications proliferate its environmental impact is growing.&nbsp;</p> <p>One way to green AI, and tame the exponential growth in demand for training AI, is to build smaller models. That’s the approach that a third hackathon competitor, EECS graduate student Jonathan Frankle, took. 
Frankle is looking for signals early in the training process that point to subnetworks within the larger, fully-trained network that can do the same job.&nbsp;The idea builds on his award-winning&nbsp;<a href="https://arxiv.org/pdf/1803.03635.pdf">Lottery Ticket Hypothesis</a>&nbsp;paper from last year that found a neural network could perform with 90 percent fewer connections if the right subnetwork was found early in training.</p> <p>The hackathon competitors were judged by John Cohn, chief scientist at the MIT-IBM Watson AI Lab, Christopher Hill, director of MIT’s Research Computing Project, and Lauren Milechin, a research software engineer at MIT.&nbsp;</p> <p>The judges recognized four&nbsp;other teams: Department of Earth, Atmospheric and Planetary Sciences (EAPS) graduate students Ali Ramadhan,&nbsp;Suyash Bire, and James Schloss,&nbsp;for adapting the programming language Julia for Satori; MIT Lincoln Laboratory postdoc Andrew Kirby, for adapting code he wrote as a graduate student to Satori using a library designed for easy programming of computing architectures; and Department of Brain and Cognitive Sciences graduate students Jenelle Feather and Kelsey Allen, for applying a technique that drastically simplifies models by cutting their number of parameters.</p> <p>IBM developers were on hand to answer questions and gather feedback.&nbsp;&nbsp;“We pushed the system — in a good way,” says Cohn. “In the end, we improved the machine, the documentation, and the tools around it.”&nbsp;</p> <p>Going forward, Satori will be joined in Holyoke by&nbsp;<a href="http://news.mit.edu/2019/lincoln-laboratory-ai-supercomputer-tx-gaia-0927">TX-Gaia</a>, Lincoln Laboratory’s new supercomputer.&nbsp;Together, they will provide feedback on the energy use of their workloads. “We want to raise awareness and encourage users to find innovative ways to green-up all of their computing,” says Hill.&nbsp;</p> Several dozen students participated in the Green AI Hackathon, co-sponsored by the MIT Research Computing Project and MIT-IBM Watson AI Lab. Photo panel: Samantha SmileyQuest for Intelligence, MIT-IBM Watson AI Lab, Electrical engineering and computer science (EECS), EAPS, Lincoln Laboratory, Brain and cognitive sciences, School of Engineering, School of Science, Algorithms, Artificial intelligence, Computer science and technology, Data, Machine learning, Software, Climate change, Awards, honors and fellowships, Hackathon, Special events and guest speakers A college for the computing age http://news.mit.edu/2020/college-for-the-computing-age-0204 With the initial organizational structure in place, the MIT Schwarzman College of Computing moves forward with implementation. Tue, 04 Feb 2020 12:30:01 -0500 Terri Park | MIT Schwarzman College of Computing http://news.mit.edu/2020/college-for-the-computing-age-0204 <p>The mission of the MIT Stephen A. 
Schwarzman College of Computing is to address the opportunities and challenges of the computing age — from hardware to software to algorithms to artificial intelligence (AI) — by transforming the capabilities of academia in three key areas: supporting the rapid evolution and growth of computer science and AI; facilitating collaborations between computing and other disciplines; and focusing on social and ethical responsibilities of computing through combining technological approaches and insights from social science and humanities, and through engagement beyond academia.</p> <p>Since starting his position in August 2019, Daniel Huttenlocher, the inaugural dean of the MIT Schwarzman College of Computing, has been working with many stakeholders in designing the initial organizational structure of the college. Beginning with the <a href="https://comptf.mit.edu/working-group-final-reports" target="_blank">College of Computing Task Force Working Group reports</a> and feedback from the MIT community, the structure has been developed through an iterative process of draft plans yielding a <a href="https://computing.mit.edu/sites/default/files/MITSchwarzmanCollegeStructure.pdf" target="_blank">26-page document</a> outlining the initial academic organization of the college that is designed to facilitate the college mission through improved coordination and evolution of existing computing programs at MIT, improved collaboration in computing across disciplines, and development of new cross-cutting activities and programs, notably in the social and ethical responsibilities of computing.</p> <p>“The MIT Schwarzman College of Computing is both bringing together existing MIT programs in computing and developing much-needed new cross-cutting educational and research programs,” says Huttenlocher. “For existing programs, the college helps facilitate coordination and manage the growth in areas such as computer science, artificial intelligence, data systems and society, and operations research, as well as helping strengthen interdisciplinary computing programs such as computational science and engineering. 
For new areas, the college is creating cross-cutting platforms for the study and practice of social and ethical responsibilities of computing, for multi-departmental computing education, and for incubating new interdisciplinary computing activities.”</p> <p>The following existing departments, institutes, labs, and centers are now part of the college:</p> <ul> <li>Department of Electrical Engineering and Computer Science (EECS), which has been <a href="http://news.mit.edu/2019/restructuring-mit-department-electrical-engineering-computer-science-1205" target="_self">reorganized</a> into three overlapping sub-units of electrical engineering (EE), computer science (CS), and artificial intelligence and decision-making (AI+D), and is jointly part of the MIT Schwarzman College of Computing and School of Engineering;</li> <li>Operations Research Center (ORC), which is jointly part of the MIT Schwarzman College of Computing and MIT Sloan School of Management;</li> <li>Institute for Data, Systems, and Society (IDSS), which will be increasing its focus on the societal aspects of its mission while also continuing to support statistics across MIT, and including the Technology and Policy Program (TPP) and Sociotechnical Systems Research Center (SSRC);</li> <li>Center for Computational Science and Engineering (CCSE), which is being renamed from the Center for Computational Engineering and broadening its focus in the sciences;</li> <li>Computer Science and Artificial Intelligence Laboratory (CSAIL);</li> <li>Laboratory for Information and Decision Systems (LIDS); and</li> <li>Quest for Intelligence.</li> </ul> <p>With the initial structure in place, Huttenlocher, the college leadership team, and the leaders of the academic units that are part of the college, in collaboration with departments in all five schools, are actively moving forward with curricular and programmatic development, including the launch of two new areas, the Common Ground for Computing Education and the Social and Ethical Responsibilities of Computing (SERC). Still in the early planning stages, these programs are the aspects of the college that are designed to cut across lines and involve a number of departments throughout MIT. Other programs are expected to be introduced as the college continues to take shape.</p> <p>“The college is an Institute-wide entity, working with and across all five schools,” says Anantha Chandrakasan, dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science, who was part of the task force steering committee. “Its continued growth and focus depend greatly on the input of our MIT community, a process which began over a year ago. I’m delighted that Dean Huttenlocher and the college leadership team have engaged the community for collaboration and discussion around the plans for the college.”</p> <p>With these organizational changes, students, faculty, and staff in these units are members of the college, and in some cases, jointly with a school, as will be those who are engaged in the new cross-cutting activities in SERC and Common Ground. “A question we get frequently,” says Huttenlocher, “is how to apply to the college. As is the case throughout MIT, undergraduate admissions are handled centrally, and graduate admissions are handled by each individual department or graduate program.”</p> <p><strong>Advancing computing</strong></p> <p>Despite the unprecedented growth in computing, there remains substantial unmet demand for expertise. 
In academia, colleges and universities worldwide are faced with oversubscribed programs in computer science and the constant need to keep up with rapidly changing material at both the graduate and undergraduate levels.</p> <p>According to Huttenlocher, the computing fields are evolving at a pace that is beyond the capabilities of current academic structures to handle. “As academics, we pride ourselves on being generators of new knowledge, but academic institutions themselves don’t change that quickly. The rise of AI is probably the biggest recent example of that, along with the fact that about 40 percent of MIT undergraduates are majoring in computer science, where we have 7 percent of the MIT faculty.”</p> <p>In order to help meet this demand, MIT is increasing its academic capacity in computing and AI with 50 new faculty positions — 25 will be core computing positions in CS, AI, and related areas, and 25 will be shared jointly with departments. Searches are now active to recruit core faculty in CS and AI+D, and for joint faculty with MIT Philosophy, the Department of Brain and Cognitive Sciences, and several interdisciplinary institutes.</p> <p>The new shared faculty searches will largely be conducted around the concept of “clusters” to build capacity at MIT in important computing areas that cut across disciplines, departments, and schools. Huttenlocher, the provost, and the five school deans will work to identify themes based on input from departments so that recruiting can be undertaken during the next academic year.</p> <p><strong>Cross-cutting collaborations in computing</strong></p> <p>Building on the history of strong faculty participation in interdepartmental labs, centers, and initiatives, the MIT Schwarzman College of Computing provides several forms of membership in the college based on cross-cutting research, teaching, or external engagement activities. While computing is affecting intellectual inquiry in almost every discipline, Huttenlocher is quick to stress that “it’s bi-directional.” He notes that existing collaborations across various schools and departments, such as MIT Digital Humanities, as well as opportunities for new such collaborations, are key to the college mission because, in the same way that “computing is changing thinking in the disciplines, the disciplines are changing the way people do computing.”</p> <p>Under the leadership of Asu Ozdaglar, the deputy dean of academics and department head of EECS, the college is developing the Common Ground for Computing Education, an interdepartmental teaching collaborative that will facilitate the offering of computing classes and the coordination of computing-related curricula across academic units.</p> <p>The objectives of this collaborative are to provide opportunities for faculty across departments to work together: co-teaching classes, creating new undergraduate majors or minors (such as in AI+D), and facilitating blended undergraduate degrees such as 6-14 (Computer Science, Economics, and Data Science), 6-9 (Computation and Cognition), 11-6 (Urban Science and Planning with Computer Science), 18-C (Mathematics with Computer Science), and others.</p> <p>“It is exciting to bring together different areas of computing with methodological and substantive commonalities as well as differences around one table,” says Ozdaglar. “MIT faculty want to collaborate in topics around computing, but they are increasingly overwhelmed with teaching assignments and other obligations.
I think the college will enable the types of interactions that are needed to foster new ideas.”</p> <p>Thinking about the impact on the student experience, Ozdaglar expects that the college will help students better navigate the computing landscape at MIT by creating clearer paths. She also notes that many students have passions beyond computer science, but realize the need to be adept in computing techniques and methodologies in order to pursue other interests, whether it be political science, economics, or urban science. “The idea for the college is to educate students who are fluent in computation, but at the same time, creatively apply computing with the methods and questions of the domain they are mostly interested in.”</p> <p>For Deputy Dean of Research Daniela Rus, who is also the director of CSAIL and the Andrew and Erna Viterbi Professor in EECS, developing research programs “that bring together MIT faculty and students from different units to advance computing and to make the world better through computing” is a top priority. She points to the recent launch of the <a href="http://news.mit.edu/2019/mit-and-us-air-force-sign-agreement-new-ai-accelerator-0520" target="_self">MIT Air Force AI Innovation Accelerator</a>, a collaboration between the MIT Schwarzman College of Computing and the U.S. Air Force focused on AI, as an example of the types of research projects the college can facilitate.</p> <p>“As humanity works to solve problems ranging from climate change to curing disease, removing inequality, ensuring sustainability, and eliminating poverty, computing opens the door to powerful new solutions,” says Rus. “And with the MIT Schwarzman College as our foundation, I believe MIT will be at the forefront of those solutions. Our scholars are laying theoretical foundations of computing and applying those foundations to big ideas in computing and across disciplines.”</p> <p><strong>Habits of mind and action</strong></p> <p>A critically important cross-cutting area is the Social and Ethical Responsibilities of Computing, which will facilitate the development of responsible “habits of mind and action” for those who create and deploy computing technologies, and the creation of technologies in the public interest.</p> <p>“The launch of the MIT Schwarzman College of Computing offers an extraordinary new opportunity for the MIT community to respond to today’s most consequential questions in ways that serve the common good,” says Melissa Nobles, professor of political science, the Kenan Sahin Dean of the MIT School of Humanities, Arts, and Social Sciences, and co-chair of the Task Force Working Group on Social Implications and Responsibilities of Computing.</p> <p>“As AI and other advanced technologies become ubiquitous in their influence and impact, touching nearly every aspect of life, we have increasingly seen the need to more consciously align powerful new technologies with core human values — integrating consideration of societal and ethical implications of new technologies into the earliest stages of their development. Asking, for example, of every new technology and tool: Who will benefit? What are the potential ecological and social costs? Will the new technology amplify or diminish human accomplishments in the realms of justice, democracy, and personal privacy?</p> <p>“As we shape the college, we are envisioning an MIT culture in which all of us are equipped and encouraged to think about such implications. 
In that endeavor, MIT’s humanistic disciplines will serve as deep resources for research, insight, and discernment. We also see an opportunity for advanced technologies to help solve political, economic, and social issues that trouble today’s world by integrating technology with a humanistic analysis of complex civilizational issues — among them climate change, the future of work, and poverty, issues that will yield only to collaborative problem-solving. It is not too much to say that human survival may rest on our ability to solve these problems via collective intelligence, designing approaches that call on the whole range of human knowledge.”</p> <p>Julie Shah, an associate professor in the Department of Aeronautics and Astronautics and head of the Interactive Robotics Group at CSAIL, who co-chaired the working group with Nobles and is now a member of the college leadership, adds that “traditional technologists aren’t trained to pause and envision the possible futures of how technology can and will be used. This means that we need to develop new ways of training our students and ourselves in forming new habits of mind and action so that we include these possible futures in our design.”</p> <p>The associate deans of Social and Ethical Responsibilities of Computing, Shah and David Kaiser, the Germeshausen Professor of the History of Science and professor of physics, are designing a systemic framework for SERC that will not only effect change in computing education and research at MIT but also inform policy and practice in government and industry. Activities currently in development include multi-disciplinary curricula embedded in traditional computing and AI courses across all levels of instruction; the commissioning and curation of a series of case studies that will be modular and available to all via MIT’s open access channels; active learning projects; cross-disciplinary monthly convenings; public forums; and more.</p> <p>“A lot of how we’ve been thinking about SERC components is building capacity with what we already have at the Institute as a very important first step. And that means asking how we get people interacting in ways that can be a little bit different than what has been familiar, because I think there are a lot of shared goals among the MIT community, but the gears aren’t quite meshing yet. We want to further support collaborations that might cut across lines that otherwise might not have had much traffic between them,” notes Kaiser.</p> <p><strong>Just the beginning</strong></p> <p>While he’s excited by the progress made so far, Huttenlocher points out that the organizational structure of the college will continue to be revised. “We are at the very beginning of the college, with a tremendous amount of excellence at MIT to build on, and with some clear needs and opportunities, but the landscape is changing rapidly and the college is very much a work in progress.”</p> <p>The college has other initiatives in the planning stages, such as the Center for Advanced Studies of Computing, which will host fellows from inside and outside of MIT on semester- or year-long project-oriented programs in focused topic areas that could seed new research, scholarly, educational, or policy work.
In addition, Huttenlocher is planning to launch a search for an assistant or associate dean of equity and inclusion, once the Institute Community and Equity Officer is in place, to focus on improving and creating programs and activities that will help broaden participation in computing classes and degree programs, increase the diversity of top faculty candidates in computing fields, and ensure that faculty search and graduate admissions processes have diverse slates of candidates and interviews.</p> <p>“The typical academic approach would be to wait until it’s clear what to do, but that would be a mistake. The way we’re going to learn is by trying and by being more flexible. That may be a more general attribute of the new era we’re living in,” he says. “We don’t know what it’s going to look like years from now, but it’s going to be pretty different, and MIT is going to be shaping it.”</p> <p>The MIT Schwarzman College of Computing will be hosting a community forum on Wednesday, Feb. 12 at 2 p.m. in Room 10-250. Members from the MIT community are welcome to attend to learn more about the initial organizational structure of the college.</p> MIT Schwarzman College of Computing leadership team (left to right) David Kaiser, Daniela Rus, Dan Huttenlocher, Julie Shah, and Asu Ozdaglar Photo: Sarah BastilleMIT Schwarzman College of Computing, School of Engineering, Computer Science and Artificial Intelligence Laboratory (CSAIL), Laboratory for Information and Decision Systems (LIDS), Quest for Intelligence, Philosophy, Brain and cognitive sciences, Digital humanities, School of Humanities Arts and Social Sciences, Artificial intelligence, Operations research, Aeronautical and astronautical engineering, Electrical Engineering & Computer Science (eecs), IDSS, Ethics, Administration, Classes and programs Genetic screen offers new drug targets for Huntington’s disease http://news.mit.edu/2020/new-drug-targets-huntingtons-gene-0130 Neuroscientists identify genes that modulate the disease’s toxic effects. Thu, 30 Jan 2020 10:59:59 -0500 Anne Trafton | MIT News Office http://news.mit.edu/2020/new-drug-targets-huntingtons-gene-0130 <p>Using a type of genetic screen that had previously been impossible in the mammalian brain, MIT neuroscientists have identified hundreds of genes that are necessary for neuron survival. They also used the same approach to identify genes that protect against the toxic effects of a mutant protein that causes Huntington’s disease.</p> <p>These efforts yielded at least one promising drug target for Huntington’s: a family of genes that may normally help cells to break down the mutated huntingtin protein before it can aggregate and form the clumps seen in the brains of Huntington’s patients.</p> <p>“These genes had never been linked to Huntington’s disease processes before.
When we saw them, that was very exciting because we found not only one gene, but actually several of the same family, and also we saw them have an effect across two models of Huntington’s disease,” says Myriam Heiman, an associate professor of neuroscience in the Department of Brain and Cognitive Sciences and the senior author of <a href="https://www.cell.com/neuron/fulltext/S0896-6273(20)30004-0" target="_blank">the study</a>.</p> <p>The researchers’ new screening technique, which allowed them to assess all of the roughly 22,000 genes found in the mouse brain, could also be applied to other neurological disorders, including Alzheimer’s and Parkinson’s diseases, says Heiman, who is also a member of MIT’s Picower Institute for Learning and Memory and the Broad Institute of MIT and Harvard.</p> <p>Broad Institute postdoc Mary Wertz is the lead author of the paper, which appears today in <em>Neuron</em>.</p> <p><strong>Genome-wide screen</strong></p> <p>For many decades, biologists have been performing screens in which they systematically knock out individual genes in model organisms such as mice, fruit flies, and the worm <em>C. elegans</em>, then observe the effects on cell survival. However, such screens have never been done in the mouse brain. One major reason for this is that delivering the molecular machinery required for these genetic manipulations is more difficult in the brain than elsewhere in the body.</p> <p>“These unbiased genetic screens are very powerful, but the technical difficulty of doing it in the central nervous system at a genome-wide scale has never been overcome,” Heiman says.</p> <p>In recent years, researchers at the Broad Institute have developed libraries of genetic material that can be used to turn off the expression of every gene found in the mouse genome. One of these libraries is based on short hairpin RNA (shRNA), which interferes with the messenger RNA that carries a particular gene’s information. Another makes use of CRISPR, a technique that can disrupt or delete specific genes in a cell. These libraries are delivered by viruses, each of which carries one element that targets a single gene.</p> <p>The libraries were designed so that each of the approximately 22,000 mouse genes is targeted by four or five shRNAs or CRISPR components, so 80,000 to 100,000 viruses need to make it into the brain to ensure that all genes are hit at least once. The MIT team came up with a way to make their solution of viruses highly concentrated, and to inject them directly into the striatum of the brain. Using this approach, they were able to deliver one of the shRNA or CRISPR elements to about 25 percent of all of the cells in the striatum.</p> <p>The researchers focused on the striatum, which is involved in regulating motor control, cognition, and emotion, because it is the brain region most affected by Huntington’s disease. It is also involved in Parkinson’s disease, as well as autism and drug addiction.</p> <p>About seven months after the injection, the researchers sequenced all of the genomic DNA in the targeted striatal neurons. Their approach is based on the idea that if particular genes are necessary for neurons’ survival, any cell with those genes knocked out will die. Then, those shRNAs or CRISPR elements will be found at lower rates in the total population of cells.</p>
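<p>The readout of such a dropout screen is essentially a counting problem: guides that target survival genes should be depleted in the final cell population relative to the injected library. The snippet below is a minimal sketch of that logic with invented counts, not the authors’ actual pipeline; published screens rely on dedicated, replicate-aware statistical tools, and names such as “GeneA” are placeholders.</p> <pre><code>import math
from collections import defaultdict

# Hypothetical read counts for each viral element, keyed by (gene, guide).
# Real libraries carry ~4-5 guides per gene, so ~22,000 genes correspond
# to roughly 90,000-110,000 distinct elements.
library_counts = {("GeneA", 1): 950, ("GeneA", 2): 1020,
                  ("GeneB", 1): 980, ("GeneB", 2): 1010}
surviving_counts = {("GeneA", 1): 120, ("GeneA", 2): 95,
                    ("GeneB", 1): 970, ("GeneB", 2): 1050}

def reads_per_million(counts):
    """Normalize raw counts so the two samples are comparable."""
    total = sum(counts.values())
    return {k: 1e6 * v / total for k, v in counts.items()}

lib = reads_per_million(library_counts)
surv = reads_per_million(surviving_counts)

# Per-guide log2 fold change; the pseudocount avoids log(0) for lost guides.
lfc_by_gene = defaultdict(list)
for key in library_counts:
    lfc = math.log2((surv[key] + 0.5) / (lib[key] + 0.5))
    lfc_by_gene[key[0]].append(lfc)

# Consistent depletion of a gene's guides marks it as a candidate survival
# gene: neurons that lost it died and dropped out of the sequenced data.
for gene, lfcs in sorted(lfc_by_gene.items()):
    mean_lfc = sum(lfcs) / len(lfcs)
    call = "depleted (candidate survival gene)" if mean_lfc <= -1.0 else "unchanged"
    print(f"{gene}: mean log2 fold change = {mean_lfc:+.2f} -> {call}")
</code></pre>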
<p>The study turned up many genes that are necessary for any cell to survive, such as enzymes involved in cell metabolism or copying DNA into RNA. The findings also revealed genes that had been identified in previous studies of fruit flies and worms as being important for neuron function, such as genes involved in the function of synapses (structures that allow neurons to communicate with each other).</p> <p>However, a novel finding of this study was the identification of genes that hadn’t been linked to neuron survival before, Heiman says. Many of those were genes that code for metabolic proteins that are essential in cells that burn a lot of energy.</p> <p>“What we interpret this to mean is that neurons in the mammalian brain are much more metabolically active and have a much higher dependency on these processes than, for example, a neuron in <em>C. elegans</em>,” Heiman says.</p> <p>William Yang, a professor of psychiatry and biobehavioral sciences at the University of California at Los Angeles, calls the new screening technique “a giant leap forward” for the field of brain research.</p> <p>“Prior to this, people really could study the molecular function of genes gene-by-gene, or maybe a few genes at a time. This is a groundbreaking study because it demonstrates that you can perform genome-wide genetic screening in the mammalian central nervous system,” says Yang, who was not involved in the study.</p> <p><strong>Promising targets</strong></p> <p>The researchers then performed the same type of screen on two different mouse models of Huntington’s disease. These mouse models express the mutated form of the huntingtin protein, which forms clumps in the brains of Huntington’s patients. In this case, the researchers compared the results of the screen in the Huntington’s mice to those in normal mice. If any of the shRNA or CRISPR elements were found less frequently in the Huntington’s mice, that would suggest that those elements targeted genes that are helping to make cells more resistant to the toxic effects of the huntingtin protein, Heiman says.</p> <p>One promising drug target that emerged from this screen is the Nme gene family, which has previously been linked to cancer metastasis, but not Huntington’s disease. The MIT team found that one of these genes, Nme1, regulates the expression of other genes that are involved in the proper disposal of proteins. The researchers hypothesize that without Nme1, these genes don’t get turned on as highly, allowing huntingtin to accumulate in the brain. They also showed that when Nme1 is overexpressed in the mouse models of Huntington’s, the Huntington’s symptoms appear to improve.</p> <p>Although this gene hasn’t been linked to Huntington’s before, there have already been some efforts to develop compounds that target it, for use in treating cancer, Heiman says.</p> <p>“This is very exciting to us because it’s theoretically a druggable compound,” she says. “If we can increase its activity with a small molecule, perhaps we can replicate the effect of genetic overexpression.”</p> <p>The research was funded by the National Institutes of Health/National Institute of Neurological Disorders and Stroke, the JPB Foundation, the Bev Hartig Huntington’s Disease Foundation, a Fay/Frank Seed Award from the Brain Research Foundation, the Jeptha H. and Emily V.
Wade Award, and the Hereditary Disease Foundation.</p> A genome-wide analysis has revealed genes that are essential for neuron survival, as well as genes that protect against the effects of Huntington’s disease.Image courtesy of NIH, edited by MIT NewsResearch, Brain and cognitive sciences, Picower Institute, Broad Institute, School of Science, National Institutes of Health (NIH), Genetics, CRISPR, Neuroscience, Disease, Drug development With these neurons, extinguishing fear is its own reward http://news.mit.edu/2020/with-these-neurons-extinguishing-fear-its-own-reward-0121 The same neurons responsible for encoding reward also form new memories to suppress fearful ones. Tue, 21 Jan 2020 12:40:01 -0500 David Orenstein | Picower Institute http://news.mit.edu/2020/with-these-neurons-extinguishing-fear-its-own-reward-0121 <p>When you expect a really bad experience to happen and then it doesn’t, it’s a distinctly positive feeling. A new study of fear extinction training in mice may suggest why: The findings not only identify the exact population of brain cells that are key for learning not to feel afraid anymore, but also show that these neurons are the same ones that help encode feelings of reward.</p> <p>The study, published Jan. 14 in <em>Neuron</em> by scientists at MIT’s Picower Institute for Learning and Memory, specifically shows that fear extinction memories and feelings of reward alike are stored by neurons that express the gene Ppp1r1b in the posterior of the basolateral amygdala (pBLA), a region known to assign associations of aversive or rewarding feelings, or “valence,” with memories. The study was conducted by Xiangyu Zhang, an MIT graduate student, Joshua Kim, a former graduate student, and Susumu Tonegawa, professor of biology and neuroscience at the RIKEN-MIT Laboratory of Neural Circuit Genetics at the Picower Institute for Learning and Memory at MIT and Howard Hughes Medical Institute.</p> <p>“We constantly live at the balance of positive and negative emotion,” Tonegawa says. “We need to have very strong memories of dangerous circumstances in order to avoid similar circumstances to recur. But if we are constantly feeling threatened we can become depressed. You need a way to bring your emotional state back to something more positive.”</p> <p><strong>Overriding fear with reward</strong></p> <p>In a prior study, Kim showed that Ppp1r1b-expressing neurons encode rewarding valence and compete with distinct Rspo2-expressing neurons in the BLA that encode negative valence. In the new study, Zhang, Kim, and Tonegawa set out to determine whether this competitive balance also underlies fear and its extinction.</p> <p>In fear extinction, an original fearful memory is thought to be essentially overwritten by a new memory that is not fearful. In the study, for instance, mice were exposed to mild shocks in a chamber, making them freeze due to the formation of a fearful memory. But the next day, when the mice were returned to the same chamber for a longer period of time without any further shocks, freezing gradually dissipated; hence, this treatment is called fear extinction training. The fundamental question, then, is whether the fearful memory is lost or just suppressed by the formation of a new memory during the fear extinction training.</p> <p>While the mice underwent fear extinction training, the scientists watched the activity of the different neural populations in the BLA.
They saw that Ppp1r1b cells were more active and Rspo2 cells were less active in mice that experienced fear extinction. They also saw that while Rspo2 cells were mostly activated by the shocks and were inhibited during fear extinction, Ppp1r1b cells were mostly active during extinction memory training and retrieval, but were inhibited during the shocks.</p> <p>These and other experiments suggested to the authors that the hypothetical fear extinction memory may be formed in the Ppp1r1b neuronal population, and the team went on to demonstrate this rigorously. For this, they employed a technique previously pioneered in their lab for identifying and manipulating the neuronal population that holds specific memory information: memory “engram” cells. Zhang used the light-sensitive protein channelrhodopsin to label Ppp1r1b neurons that were activated during retrieval of fear extinction memory. When these neurons were activated by blue laser light during a second round of fear extinction training, it enhanced and accelerated the extinction. Moreover, when the engram cells were inhibited by another optogenetic technique, fear extinction was impaired because the Ppp1r1b engram neurons could no longer suppress the Rspo2 fear neurons. That allowed the fear memory to regain primacy.</p> <p>These data met the fundamental criteria for the existence of engram cells for fear extinction memory within the pBLA Ppp1r1b cell population: activation and reactivation by recall, and enduring, offline maintenance of the acquired extinction memory.</p> <p>Because Kim had previously shown that Ppp1r1b neurons are activated by rewards and drive appetitive behavior and memory, the team sequentially tracked Ppp1r1b cell activity in mice that eagerly received a water reward, followed by a food reward, followed by fear extinction training and fear extinction memory retrieval. The overlap of Ppp1r1b neurons activated by fear extinction versus water reward was as high as the overlap of neurons activated by water versus food reward. And finally, artificial optogenetic activation of Ppp1r1b extinction memory engram cells was as effective as optogenetic activation of Ppp1r1b water reward-activated neurons in driving appetitive behaviors. Reciprocally, artificial optogenetic activation of water-responding Ppp1r1b neurons enhanced fear extinction training as efficiently as optogenetic activation of fear extinction memory engram cells. These results demonstrate that fear extinction is equivalent to a bona fide reward and therefore provide a neuroscientific basis for a common experience in daily life: omission of an expected punishment is a reward.</p> <p><strong>What next?</strong></p> <p>By establishing this intimate connection between fear extinction and reward, and by identifying a genetically defined neuronal population (Ppp1r1b) that plays a crucial role in fear extinction, this study provides potential therapeutic targets for treating fear disorders like post-traumatic stress disorder and anxiety, Zhang says.</p> <p>From the basic scientific point of view, Tonegawa says, how fear extinction training specifically activates Ppp1r1b neurons would be an important question to address. More imaginatively, the results showing how Ppp1r1b neurons override Rspo2 neurons in fear extinction raise an intriguing question about whether a reciprocal dynamic might also occur in the brain and behavior.
Investigating “joy extinction” via these mechanisms might be an interesting research topic.</p> <p>The research was supported by the RIKEN Brain Science Institute, the Howard Hughes Medical Institute, and the JPB Foundation.</p> The same neurons that store feelings of reward also store memories that suppress fearful ones, a new study shows. In this image, the broader population of Ppp1r1b neurons is labeled green while neurons storing a specific fear extinction memory are labeled red.Image: Tonegawa Lab/Picower InstitutePicower Institute, Biology, Brain and cognitive sciences, Memory, Anxiety, Neuroscience, Research, School of Science, Mental health, Learning “She” goes missing from presidential language http://news.mit.edu/2020/she-missing-presidential-language-0108 Even when people believed Hillary Clinton would win the 2016 election, they did not use “she” to refer to the next president. Wed, 08 Jan 2020 01:00:20 -0500 Anne Trafton | MIT News Office http://news.mit.edu/2020/she-missing-presidential-language-0108 <p>Throughout most of 2016, a significant percentage of the American public believed that the winner of the November 2016 presidential election would be a woman — Hillary Clinton.</p> <p>Strikingly, a new study from cognitive scientists and linguists at MIT, the University of Potsdam, and the University of California at San Diego shows that despite those beliefs, people rarely used the pronoun “she” when referring to the next U.S. president before the election. Furthermore, when participants read about the future president, encountering the pronoun “she” caused a significant stumble in their reading.</p> <p>“There seemed to be a real bias against referring to the next president as ‘she.’ This was true even for people who most strongly expected and probably wanted the next president to be a female,” says Roger Levy, an MIT professor of brain and cognitive sciences and the senior author of the new study. “There’s a systematic underuse of ‘she’ pronouns for these kinds of contexts. It was quite eye-opening.”</p> <p>As part of their study, Levy and his colleagues also conducted similar experiments in the lead-up to the 2017 general election in the United Kingdom, which determined the next prime minister. In that case, people were more likely to use the pronoun “she” than “he” when referring to the next prime minister.</p> <p>Levy suggests that sociopolitical context may account for at least some of the differences seen between the U.S. and the U.K.: At the time, Theresa May was prime minister and very strongly expected to win, and many Britons likely remember the long tenure of former Prime Minister Margaret Thatcher.</p> <p>“The situation was very different there because there was an incumbent who was a woman, and there is a history of referring to the prime minister as ‘she’ and thinking about the prime minister as potentially a woman,” he says.</p> <p>The lead author of the study is Titus von der Malsburg, a research affiliate at MIT and a researcher in the Department of Linguistics at the University of Potsdam, Germany. Till Poppels, a graduate student at the University of California at San Diego, is also an author of the paper, which appears in the journal <em>Psychological Science</em>.</p> <p><strong>Implicit linguistic biases</strong></p> <p>Levy and his colleagues began their study in early 2016, planning to investigate how people’s expectations about world events (specifically, the prospect of a woman being elected president) would influence their use of language.
They hypothesized that the strong possibility of a female president might override the implicit bias people have toward referring to the president as “he.”</p> <p>“We wanted to use the 2016 electoral campaign as a natural experiment, to look at what kind of language people would produce or expect to hear as their expectations about who was likely to win the race changed,” Levy says.</p> <p>Before beginning the study, he expected that people’s use of the pronoun “she” would go up or down based on their beliefs about who would win the election. He planned to explore how long it would take for changes in pronoun use to appear, and how much of a boost “she” usage would experience if a majority of people expected the next president to be a woman.</p> <p>However, such a boost never materialized, even though Clinton was expected to win the election.</p> <p>The researchers performed their experiment 12 times between June 2016 and January 2017, with a total of nearly 25,000 participants from the Amazon Mechanical Turk platform. The study included three tasks, and each participant was asked to perform one of them. The first task was to predict the likelihood of three candidates winning the election — Clinton, Donald Trump, or Bernie Sanders. From those numbers, the researchers could estimate the percentage of people who believed the next president would be a woman. This number was higher than 50 percent during most of the period leading up to the election, and reached just over 60 percent right before the election.</p> <p>The next two tasks were based on common linguistics research methods — one to test people’s patterns of language production, and the other to test how the words they encounter affect their reading comprehension.</p> <p>To test language production, the researchers asked participants to complete a paragraph such as “The next U.S. president will be sworn into office in January 2017. After moving into the Oval Office, one of the first things that ….”</p> <p>In this task, about 40 percent of the participants ended up using a pronoun in their text. Early in the study period, more than 25 percent of those participants used “he,” fewer than 10 percent used “she,” and around 50 percent used “they.” As the election got closer, and Clinton’s victory seemed more likely, the percentage of “she” usage never went up, but usage of “they” climbed to about 60 percent. While these results indicate that the singular “they” has reached widespread acceptance as a de facto standard in contemporary English, they also suggest a strong persistent bias against using “she” in a context where the gender of the individual referred to is not yet known.</p> <p>“After Clinton won the primary, by late summer, most people thought that she would win. Certainly Democrats, and especially female Democrats, thought that Clinton would win. But even in these groups, people were very reluctant to use ‘she’ to refer to the next president. It was never the case that ‘she’ was preferred over ‘he,’” Levy says.</p> <p>For the third task, participants were asked to read a short passage about the next president. As the participants read the text on a screen, they had to press a button to reveal each word of the sentence. This setup allowed the researchers to measure how quickly participants were reading; surprise or difficulty in comprehension leads to longer reading times.</p>
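<p>Analyses of such self-paced reading data boil down to comparing per-word button-press latencies across pronoun conditions. The snippet below is a simplified sketch with invented numbers, not the study’s actual analysis (which would use regression models accounting for participant and item variability); it simply contrasts mean reading time at the critical pronoun for “she” versus “he.”</p> <pre><code>from statistics import mean, stdev

# Hypothetical reading times (ms) at the critical pronoun, one per trial.
# A real analysis would model participant and item effects rather than
# pooling all trials as done here.
rt_she = [655, 610, 702, 640, 668, 630, 690, 645]
rt_he = [352, 340, 365, 371, 348, 359, 344, 362]

def summarize(label, rts):
    print(f"{label}: mean = {mean(rts):.0f} ms, sd = {stdev(rts):.0f} ms, n = {len(rts)}")

summarize("she", rt_she)
summarize("he", rt_he)

# The slowdown at "she" relative to "he" estimates the processing cost;
# the study reported a disruption on the order of a third of a second.
print(f"slowdown for 'she': {mean(rt_she) - mean(rt_he):.0f} ms")
</code></pre>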
<p>In this case, the researchers found that when participants encountered the pronoun “she” in a sentence referring to the next president, it cost them about a third of a second in reading time — a seemingly short amount of time that is nevertheless known from sentence processing research to indicate a substantial disruption relative to ordinary reading — compared to sentences that used “he.” This did not change over the course of the study.</p> <p>“For months, we were in a situation where large segments of the population strongly expected that a woman would win, yet those segments of the population actually didn’t use the word ‘she’ to refer to the next president, and were surprised to encounter ‘she’ references to the next president,” Levy says.</p> <p><strong>Strong stereotypes</strong></p> <p>The findings suggest that gender biases regarding the presidency are so deeply ingrained that they are extremely difficult to overcome even when people strongly believe that the next president will be a woman, Levy says.</p> <p>“It was surprising that the stereotype that the U.S. president is always a man would so strongly influence language, even in this case, which offered the best possible circumstances for particularized knowledge about an upcoming event to override the stereotypes,” he says. “Perhaps it’s an association of different pronouns with positions of prestige and power, or it’s simply an overall reluctance to refer to people in a way that indicates they’re female if you’re not sure.”</p> <p>The U.K. component of the study was conducted in June 2017 (before the election) and July 2017 (after the election but before Theresa May had successfully formed a government). Before the election, the researchers found that “she” was used about 25 percent of the time, while “he” was used less than 5 percent of the time. However, reading times for sentences referring to the prime minister as “she” were no faster than those for “he,” suggesting that there was still some bias against “she” in comprehension relative to usage preferences, even in a country that already has a woman prime minister.</p> <p>The type of gender bias seen in this study appears to extend beyond previously seen stereotypes that are based on demographic patterns, Levy says. For example, people usually refer to nurses as “she,” even if they don’t know the nurse’s gender, and more than 80 percent of nurses in the U.S. are female. In an ongoing study, von der Malsburg, Poppels, Levy, and recent MIT graduate Veronica Boyce have found that even for professions that have fairly equal representation of men and women, such as baker, “she” pronouns are underused.</p> <p>“If you ask people how likely a baker is to be male or female, it’s about 50/50. But if you ask people to complete text passages that are about bakers, people are twice as likely to use he as she,” Levy says. “Embedded within the way that we use pronouns to talk about individuals whose identities we don’t know yet, or whose identities may not be definitive, there seems to be this systematic underconveyance of expectations for female gender.”</p> <p>The research was funded by the National Institutes of Health, a Feodor Lynen Research Fellowship from the Alexander von Humboldt Foundation, and an Alfred P.
Sloan Fellowship.</p> A new study reveals that although a significant percentage of Americans believed Hillary Clinton would win the 2016 presidential election, people rarely used the pronoun “she” when referring to the next president.Image: MIT NewsResearch, Brain and cognitive sciences, Linguistics, School of Science, Women, Behavior, Language, Politics, National Institutes of Health (NIH) School of Science recognizes members with 2020 Infinite Kilometer Awards http://news.mit.edu/2020/school-science-recognizes-members-infinite-kilometer-awards-0103 Four members of the School of Science honored for contributions to the Institute. Fri, 03 Jan 2020 10:30:01 -0500 School of Science http://news.mit.edu/2020/school-science-recognizes-members-infinite-kilometer-awards-0103 <p>The MIT <a href="https://science.mit.edu/">School of Science</a> has announced the winners of the 2020 Infinite Kilometer Awards, which are presented annually to researchers within the school who are exceptional contributors to their communities.</p> <p>These winners are nominated by their peers and mentors for their hard work, which can include mentoring and advising, supporting educational programs, providing service to groups such as the MIT Postdoctoral Association, or some other form of contribution to their home departments, labs, and research centers, the school, and the Institute.</p> <p>The 2020 Infinite Kilometer Award winners in the School of Science are:</p> <ul> <li><a href="https://math.mit.edu/~edgarc/" target="_blank">Edgar Costa</a>, a research scientist in the Department of Mathematics, nominated by Professor Bjorn Poonen and Principal Research Scientist Andrew Sutherland;</li> <li><a href="http://math.mit.edu/~caseyrod/" target="_blank">Casey Rodriguez</a>, an instructor in the Department of Mathematics, nominated by Professor Gigliola Staffilani;</li> <li><a href="http://web.mit.edu/ryskin/www/" target="_blank">Rachel Ryskin</a>, a postdoc in the Department of Brain and Cognitive Sciences, nominated by Professor Edward Gibson; and</li> <li><a href="https://bcs.mit.edu/users/gsipemitedu" target="_blank">Grayson Sipe</a>, a postdoc in the Picower Institute for Learning and Memory, nominated by Professor Mriganka Sur.</li> </ul> <p>A monetary award is granted to recipients, and a celebratory reception will be held later this spring in their honor, attended by those who nominated them, family, and friends, in addition to the soon-to-be-announced recipients of the 2020 Infinite Mile Award.</p> School of Science, Mathematics, Brain and cognitive sciences, Picower Institute, Awards, honors and fellowships, Graduate, postdoctoral, Staff, Community Study may explain how infections reduce autism symptoms http://news.mit.edu/2019/explain-infections-fever-reduce-autism-1218 An immune molecule sometimes produced during infection can influence the social behavior of mice. Wed, 18 Dec 2019 13:00:00 -0500 Anne Trafton | MIT News Office http://news.mit.edu/2019/explain-infections-fever-reduce-autism-1218 <p>For many years, some parents have noticed that their autistic children’s behavioral symptoms diminished when they had a fever. This phenomenon has been documented in at least two large-scale studies over the past 15 years, but it was unclear why fever would have such an effect.</p> <p>A new study from MIT and Harvard Medical School sheds light on the cellular mechanisms that may underlie this phenomenon. 
In a study of mice, the researchers found that in some cases of infection, an immune molecule called IL-17a is released and suppresses a small region of the brain’s cortex that has previously been linked to social behavioral deficits in mice.</p> <p>“People have seen this phenomenon before [in people with autism], but it’s the kind of story that is hard to believe, which I think stems from the fact that we did not know the mechanism,” says Gloria Choi, the Samuel A. Goldblith Career Development Assistant Professor of Applied Biology and an assistant professor of brain and cognitive sciences at MIT. “Now the field, including my lab, is trying hard to show how this works, all the way from the immune cells and molecules to receptors in the brain, and how those interactions lead to behavioral changes.”</p> <p>Although findings in mice do not always translate into human treatments, the study may help guide the development of strategies to reduce some behavioral symptoms of autism or other neurological disorders, says Choi, who is also a member of MIT’s Picower Institute for Learning and Memory.</p> <p>Choi and Jun Huh, an assistant professor of immunology at Harvard Medical School, are the senior authors of <a href="https://www.nature.com/articles/s41586-019-1843-6" target="_blank">the study</a>, which appears in <em>Nature</em> today. The lead authors of the paper are MIT graduate student Michael Douglas Reed and MIT postdoc Yeong Shin Yim.</p> <p><strong>Immune influence</strong></p> <p>Choi and Huh have previously explored other links between inflammation and autism. In 2016, <a href="http://news.mit.edu/2016/maternal-inflammation-autism-behavior-0128">they showed</a> that mice born to mothers who experience a severe infection during pregnancy are much more likely to show behavioral symptoms such as deficits in sociability, repetitive behaviors, and abnormal communication. They found that this is caused by exposure to maternal IL-17a, which produces defects in a specific brain region of the developing embryos. This brain region, S1DZ, is part of the somatosensory cortex and is believed to be responsible for sensing where the body is in space.</p> <p>“Immune activation in the mother leads to very particular cortical defects, and those defects are responsible for inducing abnormal behaviors in offspring,” Choi says.</p> <p>A link between infection during pregnancy and autism in children has also been seen in humans. A 2010 study that included all children born in Denmark between 1980 and 2005 found that severe viral infections during the first trimester of pregnancy translated to a threefold increase in risk for autism, and serious bacterial infections during the second trimester were linked with a 1.42-fold increase in risk. These infections included influenza, viral gastroenteritis, and severe urinary tract infections.</p> <p>In the new study, Choi and Huh turned their attention to the often-reported link between fever and the reduction of autism symptoms.</p> <p>“We wanted to ask whether we could use mouse models of neurodevelopmental disorders to recapitulate this phenomenon,” Choi says. “Once you see the phenomenon in animals, you can probe the mechanism.”</p> <p>The researchers began by studying mice that exhibited behavioral symptoms due to exposure to inflammation during gestation.
They injected these mice with a bacterial component called LPS, which induces a fever response, and found that the animals’ social interactions were temporarily restored to normal.</p> <p>Further experiments revealed that during inflammation, these mice produce IL-17a, which binds to receptors in S1DZ — the same brain region originally affected by maternal inflammation. IL-17a reduces neural activity in S1DZ, which makes the mice temporarily more interested in interacting with other mice.</p> <p>If the researchers inhibited IL-17a or knocked out the receptors for IL-17a, this symptom reversal did not occur. They also showed that simply raising the mice’s body temperature did not have any effect on behavior, offering further evidence that IL-17a is necessary for the reversal of symptoms.</p> <p>“This suggests that the immune system uses molecules like IL-17a to directly talk to the brain, and it actually can work almost like a neuromodulator to bring about these behavioral changes,” Choi says. “Our study provides another example as to how the brain can be modulated by the immune system.”</p> <p>“What’s remarkable about this paper is that it shows that this effect on behavior is not necessarily a result of fever but the result of cytokines being made,” says Dan Littman, a professor of immunology at New York University, who was not involved in the study. “There’s a growing body of evidence that the central nervous system, in mammals at least, has evolved to be dependent to some degree on cytokine signaling at various times during development or postnatally.”</p> <p><strong>Behavioral effects</strong></p> <p>The researchers then performed the same experiments in three additional mouse models of neurological disorders. These mice lack a gene linked to autism and similar disorders — either Shank3, Cntnap2, or Fmr1. These mice all show deficits in social behavior similar to those of mice exposed to inflammation in the womb, even though the origin of their symptoms is different.</p> <p>Injecting those mice with LPS did produce inflammation, but it did not have any effect on their behavior. The reason for that, the researchers found, is that in these mice, inflammation did not stimulate IL-17a production. However, if the researchers injected IL-17a into these mice, their behavioral symptoms did improve.</p> <p>This suggests that mice that are exposed to inflammation during gestation end up with their immune systems somehow primed to more readily produce IL-17a during subsequent infections. Choi and Huh have <a href="http://news.mit.edu/2017/studies-explain-link-between-autism-severe-infection-during-pregnancy-0913">previously shown</a> that the presence of certain bacteria in the gut can also prime IL-17a responses. They are now investigating whether the same gut-residing bacteria contribute to the LPS-induced reversal of social behavior symptoms that they found in the new <em>Nature</em> study.</p> <p>“It was amazing to discover that the same immune molecule, IL-17a, could have dramatically opposite effects depending on context: Promoting autism-like behaviors when it acts on the developing fetal brain and ameliorating autism-like behaviors when it modulates neural activity in the adult mouse brain.
This is the degree of complexity we are trying to make sense of,” Huh says.</p> <p>Choi’s lab is also exploring whether any immune molecules other than IL-17a may affect the brain and behavior.</p> <p>“What’s fascinating about this communication is the immune system directly sends its messengers to the brain, where they work as if they’re brain molecules, to change how the circuits work and how the behaviors are shaped,” Choi says.</p> <p>The research was funded by the Jeongho Kim Neurodevelopmental Research Fund, Perry Ha, the Hock E. Tan and K. Lisa Yang Center for Autism Research, the Simons Center for the Social Brain, the Simons Foundation Autism Research Initiative, the Champions of the Brain Weedon Fellowship, and a National Science Foundation Graduate Research Fellowship.</p> MIT and Harvard Medical School researchers have uncovered a cellular mechanism that may explain why some children with autism experience a temporary reduction in behavioral symptoms when they have a fever.Research, Brain and cognitive sciences, Picower Institute, School of Science, Autism, National Science Foundation (NSF) Study probing visual memory and amblyopia unveils many-layered mystery http://news.mit.edu/2019/study-probing-visual-memory-amblyopia-unveils-many-layered-mystery-1217 Scientists pinpoint the role of a receptor in vision degradation in amblyopia. Tue, 17 Dec 2019 15:40:01 -0500 David Orenstein | Picower Institute http://news.mit.edu/2019/study-probing-visual-memory-amblyopia-unveils-many-layered-mystery-1217 <div> <div> <div> <div> <p>In decades of studying how neural circuits in the brain’s visual cortex adapt to experience, MIT Professor Mark Bear’s lab has followed the science wherever it has led. This approach has yielded the discovery of cellular mechanisms serving visual recognition memory, in which the brain learns what sights are familiar so it can focus on what’s new, as well as a potential therapy for <a href="https://picower.mit.edu/innovations-inventions/treating-amblyopia">amblyopia</a>, a disorder in which children born with disrupted vision in one eye can lose visual acuity there permanently without intervention. But this time, his lab’s latest investigation has yielded surprising new layers of mystery.</p> </div> </div> </div> </div> <div> <div> <div> <div> <div> <div> <div> <div> <div> <div> <div> <p>Heading into the experiments described in a new paper in <em>Cerebral Cortex</em>, Bear and his team expected to confirm that key proteins called NMDA receptors act specifically in neurons in layer 4 of the visual cortex to make the circuit connection changes, or “plasticity,” necessary for both visual recognition memory and amblyopia. Instead, the study has produced unexpectedly divergent results.</p> <p>“There are two stories here,” says Bear, who is a co-senior author and the Picower Professor of Neuroscience in the Picower Institute for Learning and Memory. “One is that we have further pinpointed where to look for the root causes of amblyopia. The other is that we have now completely blown up what we thought was happening in stimulus-selective response potentiation, or SRP, the synaptic change believed to give rise to visual recognition memory.”</p> <p>The cortex is built like a stack of pancakes, with distinct layers of cells serving different functions. Layer 4 is considered to be the primary “input layer” that receives relatively unprocessed information from each eye.
Plasticity that is restricted to one eye has been assumed to occur at this early stage of cortical processing, before the information from the two eyes becomes mixed. However, while the evidence demonstrates that NMDA receptors in layer 4 neurons are indeed necessary for the degradation of vision in a deprived eye, they apparently play no role in how neural connections, or synapses, serving the uncompromised eye strengthen to compensate, and similarly don’t matter for the development of SRP. That’s even though NMDA receptors in visual cortex neurons have been directly shown to matter in these phenomena before, and layer 4 neurons are known to participate in these circuits via telltale changes in electrical activity.</p> <p>“These findings reveal two key things,” says Samuel Cooke, co-senior author and a former member of the Bear Lab who now has his own lab at King’s College London. “First, that the neocortical circuits modified to enhance cortical responses to sensory inputs during deprivation, or to stimuli that have become familiar, reside elsewhere in neocortex, revealing a complexity that we had not previously appreciated. Second, that effects clearly manifest in one region of the brain can actually be echoes of plasticity occurring elsewhere, thereby illustrating the importance of not only observing biological phenomena, but also understanding their origins by locally disrupting known underlying mechanisms.”</p> <p>To perform the study, Bear Lab postdoc and lead author Ming-fai Fong used a genetic technique to specifically knock out NMDA receptors in excitatory neurons in layer 4 of the visual cortex of mice. Armed with that tool, she could then investigate the consequences for visual recognition memory and “monocular deprivation,” a lab model for amblyopia in which one eye is temporarily closed early in life. The hypothesis was that knocking out the NMDA receptor in these cells in layer 4 would prevent SRP from taking hold amid repeated presentations of the same stimulus, and would prevent the degradation of vision in a deprived eye, as well as the commensurate strengthening of the unaffected eye.</p> <p>“We were gratified to note that the amblyopia-like effect of losing cortical vision as a result of closing an eye for several days in early life was completely prevented,” Cooke says. “However, we were stunned to find that the two enhancing forms of plasticity remained completely intact.”</p> <p>Fong says that with continued work, the lab hopes to pinpoint where in the circuit NMDA receptors are triggering SRP, and the compensatory increase in strength in a non-deprived eye, after monocular deprivation. Doing so, she says, could have clinical implications.</p> <p>“Our study identified a crucial component in the visual cortical circuit that mediates the plasticity underlying amblyopia,” she says. “This study also highlights the ongoing need to identify the distinct components in the visual cortical circuit that mediate visual enhancement, which could be important both in developing treatments for visual disability as well as developing biomarkers for neurodevelopmental disorders.
These efforts are ongoing in the lab.”</p> <p>The search now moves to other layers, Bear says, including layer 6, which also receives unprocessed input from each eye.</p> <p>“Clearly, this is not the end of the story,” Bear says.</p> <p>In addition to Fong, Bear, and Cooke, the paper’s other authors are Peter Finnie, Taekeun Kim, Aurore Thomazeau, and Eitan Kaplan.</p> <p>The National Eye Institute and the JPB Foundation funded the study.</p> </div> </div> </div> </div> </div> </div> </div> </div> </div> </div> </div> The visual cortex, where the brain processes visual input, is made like a stack of pancakes. In a new study, scientists sought to determine the role in several visual phenomena of a receptor on neurons in layer 4. Photo: Bear Lab/Picower InstitutePicower Institute, Brain and cognitive sciences, School of Science, Neuroscience, Vision, Genetics, Memory, Research Supporting students in Puerto Rico after a hurricane’s devastation http://news.mit.edu/2019/supporting-puerto-rico-students-hector-de-jesus-cortes-1213 Postdoc Héctor De Jesús-Cortés works to build up the STEM pipeline from his homeland to MIT and beyond. Fri, 13 Dec 2019 00:00:00 -0500 Fernanda Ferreira | School of Science http://news.mit.edu/2019/supporting-puerto-rico-students-hector-de-jesus-cortes-1213 <p>When Hurricane Maria hit Puerto Rico in September 2017, Héctor De Jesús-Cortés was vacationing on the island with his wife, Edmarie Guzmán-Vélez. “Worst vacation ever, but it actually turned out to be the most important in my life,” says De Jesús-Cortés. In the days immediately after the hurricane, both focused on helping their families get their bearings; after that first week, however, they were itching to do more. That itch would take them to San Juan, Puerto Rico’s capital, where they asked the then-secretary of education a simple question: “How can we help?”</p> <p>With De Jesús-Cortés’ PhD in neuroscience and Guzmán-Vélez’s PhD in clinical psychology, they soon became involved in an effort led by the Department of Education to help students and school staff, as well as the community at large, troubled by the hurricane. “Everyone was traumatized, so if you bring kids to teachers who are also traumatized, that’s a bad recipe,” explains De Jesús-Cortés.</p> <p>De Jesús-Cortés and Guzmán-Vélez connected with their friend Rosaura Orengo-Aguayo, a clinical psychologist and assistant professor at the Medical University of South Carolina who studies traumatic stress and Hispanic populations. Working together with the Department of Education and the U.S. Department of Health and Human Services, they developed a program to address trauma in schools. The Esperanza, or “hope,” program is ongoing and has already trained hundreds of school staff members on how to manage trauma and anxiety, and to identify these manifestations in students.</p> <p>Back in Boston, De Jesús-Cortés has continued his efforts for Puerto Rico, raising funds for micro-entrepreneurs and teaching neuroscience in online classes for undergraduates on the island. Each effort is guided by that same simple question — How can we help? His latest effort with Guzmán-Vélez is a precollege summer program at MIT that will give Puerto Rican students a taste of scientific research.</p> <p><strong>A sense of possibility</strong></p> <p>For De Jesús-Cortés, teaching is more than just a transfer of knowledge. “I see teaching as mentorship,” he says.
“I want students to be exposed to opportunities, because growing up in Puerto Rico, I know how difficult it can be for some students to get those opportunities.”</p> <p>While De Jesús-Cortés was an undergraduate at the University of Puerto Rico, he participated in Minority Access for Research Careers (MARC), a National Institutes of Health-funded program that supports underrepresented minority undergraduates as they move toward careers in biomedical sciences. “We had workshops every month about applications; they would bring recruiters, and they would also pay for summer internships,” explains De Jesús-Cortés.</p> <p>MARC allowed De Jesús-Cortés to see a career in science as a possibility, and he envisions that the summer school, whose inaugural class will be in summer 2020, will do something similar. “The idea is to have kids first spend two weeks in Puerto Rico and expose them to research at the undergraduate level,” explains De Jesús-Cortés. The students will be at the Universidad del Sagrado Corazón in Puerto Rico; the university has partnered with De Jesús-Cortés on the project. “Then they travel to Boston and see what research is happening here.” The 15-20 students will spend two weeks in Massachusetts, living in the MIT dorms, visiting labs, and learning how to apply to colleges in the United States.</p> <p>The MARC program also gave De Jesús-Cortés a community. “To this day, I talk to my MARC fellows,” he says, and that’s something he hopes to replicate with the summer students. “Each student will have a mentor, and I want them to keep talking after the program,” De Jesús-Cortés says.</p> <p>The summer school will not just give students a taste of scientific research; it will also show that universities like MIT are within their reach. “I was born and raised in Puerto Rico, and my schools didn't have the best resources in STEM,” De Jesús-Cortés says. He hopes that, by seeing researchers in Greater Boston who have the same background, the summer students will see MIT and a career in science as a possibility. “Students need to be exposed to mentors and role models that prove that it can be done,” he says.</p> <p><strong>Fixing vision</strong></p> <p>De Jesús-Cortés works on the summer school, and his other efforts for Puerto Rico and the Latino community, in addition to his neuroscience research. As a postdoc in the lab of Mark Bear, the Picower Professor of Neuroscience, he’s trying to use electrophysiology to figure out when neurons in the brain need a little help to communicate.</p> <p>Neurons communicate with one another using both chemical and electrical activity. An action potential, which is electrical, travels down the arms of the neuron, but when it reaches the end of that arm, the synapse, the communication becomes chemical. The electrical signal stimulates the release of neurotransmitters, which reach across the gap between two neurons, stimulating the neighboring neuron to make its own action potential.</p> <p>Not every neuron is equally capable of producing action potentials. “In a neurodegenerative disorder, before the neuron dies, it’s sick,” says De Jesús-Cortés. “And if it’s sick, it’s not going to communicate electrically very well.” De Jesús-Cortés wants to use this diminished electrical activity as a biomarker for disorders in the brain. 
“If I can detect that diminished activity with an electrode, then I can intervene with a pharmacological agent that will prevent the death of neurons,” he explains.</p> <p>To test this, De Jesús-Cortés is focusing on amblyopia, a condition more commonly known as lazy eye. Lazy eye happens when the communication between the visual cortex — a region in the back of the brain where visual information is received and processed — and one of the eyes is impaired, resulting in blurred vision. Electrical activity in the visual cortex that corresponds to the lazy eye is also reduced, and De Jesús-Cortés can detect that decreased activity using electrodes.</p> <p>When amblyopia is caught early on, a combination of surgery and an eye patch can strengthen the once-lazy eye, getting rid of the blurriness. “But, if you catch that condition after 8 years old, the patching doesn’t work as well,” says De Jesús-Cortés. Another postdoc in the Bear Lab, Ming-fai Fong, figured out that tetrodotoxin, which is found in puffer fish, is able to reboot the lazy eye, bringing up electrical activity in the visual cortex and giving mice with amblyopia perfect vision mere hours after receiving a drop of the toxin.</p> <p>But we don’t actually know how tetrodotoxin is doing this on a molecular level. “Now, putting tetrodotoxin in humans will be a little bit difficult,” says De Jesús-Cortés. Add too much toxin and you could cause a number of new problems. He is investigating what exactly the toxin is doing to sick neurons. Using that information, he then wants to design alternative treatments that have the same or even better effect: “Find neurons that are quiet because they are sick, and reboot them with a pharmacological agent,” he says.</p> <p>In the future, De Jesús-Cortés wants to look beyond the visual cortex, at other regions of the brain and other conditions like Parkinson’s, Alzheimer’s, and autism, finding the hurting neurons and giving them a boost.</p> <p>In both his neuroscience research and his work for Puerto Rico, De Jesús-Cortés is passionate about finding ways to help. But he has also learned that for all these efforts to succeed, he needs to accept help as well. “When you are working on so many projects at the same time, you need a lot of different people that believe in your vision,” he says. “And if you’re helping them, you believe in their vision.” For De Jesús-Cortés, this reciprocity is one of the most important aspects of his work, and it’s a guiding principle in his research and life. “I believe in collaboration like nothing else.”</p> At MIT, Héctor De Jesús-Cortés studies neuronal electrical activity underlying diseases such as amblyopia, or lazy eye.Photo: Steph StevensPicower Institute, Brain and cognitive sciences, School of Science, Diversity and inclusion, Research, Profile, graduate, Graduate, postdoctoral, Natural disasters, Latin America, Education, teaching, academics Differences between deep neural networks and human perception http://news.mit.edu/2019/differences-between-deep-neural-networks-and-human-perception-1212 Stimuli that sound or look like gibberish to humans are indistinguishable from naturalistic stimuli to deep networks. Thu, 12 Dec 2019 13:05:01 -0500 Kenneth I. Blum | Center for Brains, Minds and Machines http://news.mit.edu/2019/differences-between-deep-neural-networks-and-human-perception-1212 <p>When your mother calls your name, you know it’s her voice — no matter the volume, even over a poor cell phone connection. 
And when you see her face, you know it’s hers — if she is far away, if the lighting is poor, or if you are on a bad FaceTime call. This robustness to variation is a hallmark of human perception. On the other hand, we are susceptible to illusions: We might fail to distinguish between sounds or images that are, in fact, different. Scientists have explained many of these illusions, but we lack a full understanding of the invariances in our auditory and visual systems.
</p> <p>Deep neural networks have also performed speech recognition and image classification tasks with impressive robustness to variations in the auditory or visual stimuli. But are the invariances learned by these models similar to the invariances learned by human perceptual systems? A group of MIT researchers has discovered that they are different. They presented their findings yesterday at the 2019 <a href="https://nips.cc/">Conference on Neural Information Processing Systems</a>.
</p> <p>The researchers made a novel generalization of a classical concept: “metamers” — physically distinct stimuli that generate the same perceptual effect. The most famous examples of metamer stimuli arise because most people have three different types of cones in their retinae, which are responsible for color vision. The perceived color of any single wavelength of light can be matched exactly by a particular combination of three lights of different colors — for example, red, green, and blue lights. Nineteenth-century scientists inferred from this observation that humans have three different types of bright-light detectors in our eyes. This is the basis for electronic color displays on all of the screens we stare at every day. Another example in the visual system is that when we fix our gaze on an object, we may perceive surrounding visual scenes that differ at the periphery as identical. In the auditory domain, something analogous can be observed. For example, the “textural” sound of two swarms of insects might be indistinguishable, despite differing in the acoustic details that compose them, because they have similar aggregate statistical properties. In each case, the metamers provide insight into the mechanisms of perception, and constrain models of the human visual or auditory systems.
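</p> <p>The classical case can be made concrete with a toy calculation. In the sketch below (illustrative numbers only, not measured cone sensitivities), any two light spectra that differ by a vector in the null space of the cone-response matrix excite the three cones identically, and are therefore metamers:</p> <pre><code>import numpy as np

# Toy sensitivities of the three cone types across four wavelength bands.
# Illustrative values only -- not real cone fundamentals.
cones = np.array([
    [0.6, 0.3, 0.1, 0.0],   # L ("red") cone
    [0.2, 0.5, 0.3, 0.0],   # M ("green") cone
    [0.0, 0.1, 0.4, 0.5],   # S ("blue") cone
])

light_a = np.array([1.0, 0.5, 2.0, 1.0])   # power per band of one light

# A rank-3 3x4 matrix has a one-dimensional null space; adding a multiple
# of its basis vector changes the spectrum but not the cone responses.
# (Scale kept small so the new spectrum stays physically nonnegative.)
null_vec = np.linalg.svd(cones)[2][-1]
light_b = light_a + 0.5 * null_vec         # a physically different light

print(cones @ light_a)   # responses to light A
print(cones @ light_b)   # identical responses to light B: a metamer
</code></pre> <p>Because those three responses are all the retina passes downstream about color, lights A and B look identical despite being physically different: the original, human version of a metamer.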
</p> <p>In the current work, the researchers randomly chose natural images and sound clips of spoken words from standard databases, and then synthesized sounds and images so that deep neural networks would sort them into the same classes as their natural counterparts. That is, they generated physically distinct stimuli that are classified identically by models, rather than by humans. This is a new way to think about metamers, generalizing the concept to swap the role of computer models for human perceivers. They therefore called these synthesized stimuli “model metamers” of the paired natural stimuli. The researchers then tested whether humans could identify the words and images.
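</p> <p>The synthesis step can be thought of as gradient descent on the stimulus itself. The sketch below is a minimal reconstruction of that idea in PyTorch (an illustration under stated assumptions; the team’s actual models and loss functions may differ), starting from noise and nudging it until a chosen stage of a trained network responds as it does to the natural sound or image:</p> <pre><code>import torch

def synthesize_metamer(model_layer, natural_input, steps=1000, lr=0.01):
    """Optimize a noise input until model_layer responds to it as it
    does to natural_input. model_layer is any differentiable callable
    mapping an input tensor to an activation tensor."""
    with torch.no_grad():
        target = model_layer(natural_input)       # activations to match
    metamer = torch.randn_like(natural_input, requires_grad=True)
    opt = torch.optim.Adam([metamer], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model_layer(metamer), target)
        loss.backward()
        opt.step()
    return metamer.detach()  # physically distinct, same model response
</code></pre> <p>Anything the network is invariant to, anything that leaves the matched activations unchanged, is free to drift during this optimization, which is how the synthesized stimuli can wander far from anything humans recognize.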
</p> <p>“Participants heard a short segment of speech and had to identify from a list of words which word was in the middle of the clip. For the natural audio this task is easy, but for many of the model metamers humans had a hard time recognizing the sound,” explains first-author Jenelle Feather, a graduate student in the <a href="http://bcs.mit.edu/" target="_blank">MIT Department of Brain and Cognitive Sciences</a> (BCS) and a member of the <a href="https://cbmm.mit.edu/" target="_blank">Center for Brains, Minds, and Machines</a> (CBMM). That is, humans would not put the synthetic stimuli in the same class as the spoken word “bird” or the image of a bird. In fact, model metamers generated to match the responses of the deepest layers of the model were generally unrecognizable as words or images by human subjects.
</p> <p><a href="https://cbmm.mit.edu/about/people/mcdermott">Josh McDermott</a>, associate professor in BCS and investigator in CBMM, makes the following case: “The basic logic is that if we have a good model of human perception, say of speech recognition, then if we pick two sounds that the model says are the same and present these two sounds to a human listener, that human should also say that the two sounds are the same. If the human listener instead perceives the stimuli to be different, this is a clear indication that the representations in our model do not match those of human perception.”
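</p> <p>That logic amounts to a simple falsification test. Schematically (the helper functions here are hypothetical stand-ins for the model comparison and the behavioral experiment, not the study’s actual code):</p> <pre><code>def passes_metamer_test(model_same, human_same, stim_a, stim_b):
    """model_same and human_same are hypothetical callables returning
    True when the model (or a listener) treats two stimuli as the same.
    Only pairs the model equates constrain it: if the model says "same"
    but the human does not, the model is falsified."""
    if model_same(stim_a, stim_b):
        return human_same(stim_a, stim_b)
    return True   # pairs the model distinguishes carry no verdict here
</code></pre> <p>The experiments described above run this test at scale, and for metamers matched at the deepest model layers, it fails.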
</p> <p>Examples of the model metamer stimuli can be found in the video below.</p> <div class="cms-placeholder-content-video"></div> <p>Joining Feather and McDermott on the paper are Alex Durango, a post-baccalaureate student, and Ray Gonzalez, a research assistant, both in BCS.
</p> <p>There is another type of failure of deep networks that has received a lot of attention in the media: adversarial examples (see, for example, "<a href="http://news.mit.edu/2019/why-did-my-classifier-mistake-turtle-for-rifle-computer-vision-0731">Why did my classifier just mistake a turtle for a rifle?</a>"). These are stimuli that appear similar to humans but are misclassified by a model network (by design — they are constructed to be misclassified). They are complementary to the stimuli generated by Feather's group, which sound or appear different to humans but are designed to be co-classified by the model network. The vulnerabilities of model networks exposed to adversarial attacks are well-known — face-recognition software might mistake identities; automated vehicles might not recognize pedestrians.
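</p> <p>For contrast, a standard adversarial example can be built with the textbook fast gradient sign method, sketched here generically (this is not necessarily the specific attack behind the examples above): it perturbs an input just enough to flip the model’s label while leaving human perception untouched.</p> <pre><code>import torch
import torch.nn.functional as F

def fgsm_adversarial(model, image, true_label, epsilon=0.01):
    """Shift each pixel a tiny step in the direction that increases the
    model's loss; a small epsilon keeps the change imperceptible."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    return (image + epsilon * image.grad.sign()).detach()
</code></pre> <p>The two constructions probe opposite directions of mismatch: adversarial examples look the same to us but different to the network, while model metamers look the same to the network but often like gibberish to us.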
</p> <p>The importance of this work lies in improving models of perception beyond deep networks. Although the standard adversarial examples indicate differences between deep networks and human perceptual systems, the new stimuli generated by the McDermott group arguably represent a more fundamental model failure — they show that generic examples of stimuli classified as the same by a deep network produce wildly different percepts for humans.
</p> <p>The team also figured out ways to modify the model networks to yield metamers that were more plausible sounds and images to humans. As McDermott says, “This gives us hope that we may be able to eventually develop models that pass the metamer test and better capture human invariances.”
</p> <p>“Model metamers demonstrate a significant failure of present-day neural networks to match the invariances in the human visual and auditory systems,” says Feather. “We hope that this work will provide a useful behavioral measuring stick to improve model representations and create better models of human sensory systems.”
</p> Associate Professor Josh McDermott (left) and graduate student Jenelle Feather generated physically distinct stimuli that are classified identically by models, rather than by humans.Photos: Justin Knight and Kris BrewerBrain and cognitive sciences, Center for Brains Minds and Machines, McGovern Institute, Research, Machine learning, Artificial intelligence, School of Science, Neuroscience Fueled by the power of stories http://news.mit.edu/2019/guadalupe-cruz-neuroscience-storytelling-1205 A fascination with storytelling led K. Guadalupe Cruz to graduate studies in neuroscience and shapes her work to promote inclusivity at MIT. Thu, 05 Dec 2019 00:00:00 -0500 Fernanda Ferreira | School of Science http://news.mit.edu/2019/guadalupe-cruz-neuroscience-storytelling-1205 <p>K. Guadalupe Cruz’s path into neuroscience began with storytelling.</p> <p>“For me, it was always interesting that we are capable of keeping knowledge over so many generations,” says Cruz, a PhD student in the Department of Brain and Cognitive Sciences. For millennia, information has been passed down through the stories shared by communities, and Cruz wanted to understand how that information was transferred from one person to the next. “That was one of my first big questions,” she says.</p> <p>Cruz has been asking this question since high school, and the urge to answer it led her to anthropology, psychology, and linguistics, but she felt like something was missing. “I wanted a mechanism,” she explains. “So I kept going further and further, and eventually ended up in neuroscience.”</p> <p>As an undergraduate at the University of Arizona, Cruz became fascinated with the sheer complexity of the brain. “We started learning a lot about different animals and how their brains worked,” says Cruz. “I just thought it was so cool,” she adds. That fascination got her into the lab and Cruz has never left. “I’ve been doing research ever since.”</p> <p><strong>A sense of space </strong></p> <p>If you’ve ever seen a model of the brain, you’ve probably seen one that is divided into regions, each shaded with a different color and with its own distinct function. The frontal lobe in red plans, the cerebellum in blue coordinates movement, the hippocampus in green remembers. But this is an oversimplification.</p> <p>“The brain isn’t entirely modular,” says Cruz. Different parts of the brain don’t have a single function, but rather a number of functions, and their complexity increases toward the front of the brain. The intricacy of these frontal regions is embodied in their anatomy: “They have a lot of cells and they’re heavily interconnected,” she explains. These frontal regions encode many types of information, which means they are involved in a number of different functions, sometimes in abstract ways that are difficult to unravel.</p> <p>The frontal region Cruz is bent on demystifying is the anterior cingulate cortex, or ACC, a part of the brain that wraps around the corpus callosum, the fiber bundle that connects the brain’s left and right hemispheres. Working with mice in Professor Mriganka Sur’s lab, Cruz looks at the role of the ACC in coordinating different downstream brain structures in orienting tasks. In humans, the ACC is involved in motivation, but in mice it has a role in visually guided orienting.</p> <p>“Everything you experience in the world is relative to your own body,” says Cruz. Being able to determine where your body is in space is essential for navigating through the world. 
To explain this, Cruz gives the example of a driver making a turn. “If you have to do a left turn, you’re going to need to use different information to determine whether you’re allowed to make that turn and if that’s the right choice,” Cruz explains. The ACC in this analogy is the driver: It has to take in all the information about the surrounding world, decide what to do, and then send this decision to other parts of the brain that control movement.</p> <p>To study this, Cruz gives mice a simple task: She shows them two squares of different shades on a screen and asks them to move the darker square. “The idea is, how does this area of the brain take in this information, compare the two squares and decide which movement is correct,” she explains. Many researchers study how information gets to the ACC, but Cruz is interested in what happens after the information arrives, focusing on the processing and output ends of the equation, particularly in deciphering the contributions of different brain connections to the resulting action.</p> <p>Cruz uses optogenetics to figure out which areas of the brain are necessary for decision-making. Optogenetics is a technique that uses light to turn on or off previously targeted neurons or areas of the brain. “This allows us to causally test whether parts of a circuit are required for a behavior or not,” she explains. Cruz distills it even further: “But mostly, it just lets us know that if you screw with this area, you’re going to screw something up.”</p> <p><strong>Community builder </strong></p> <p>At MIT, Cruz has been able to ask the neuroscience questions she’s captivated by, but coming to the Institute also made her more aware of how few underrepresented minorities, or URMs, there are in science broadly. “I started realizing how academia is not built for us, or rather, is built to exclude us,” says Cruz. “I saw these problems, and I wanted to do something to address them.”</p> <p>Cruz has focused many of her efforts on community building. “A lot of us come from communities that are very ‘other’ oriented, and focused on helping one another,” she explains. One of her initiatives is Community Lunch, a biweekly casual lunch in the brain and cognitive sciences department. “It’s sponsored by the School of Science for basically anybody that’s a person of color in academia,” says Cruz. The lunch includes graduate students, postdocs, and technicians who come together to talk about their experiences in academia. “It’s kind of like a support group,” she says. Connecting with people who have shared experiences is important, she adds: “You get to talk about things and realize this is a feeling that a lot of people have.”</p> <p>Another goal of Cruz’s is to make sure MIT understands the hurdles that many URMs experience in academia. For instance, applying to graduate school or having to cover costs for conferences can put a real strain on finances. “I applied to 10 programs; I was eating cereal every day for a month,” remembers Cruz. “I try to bring that information to light, because faculty and administrators have often never experienced it.”</p> <p>Cruz is also the representative for the LGBT community on the MIT Graduate Student Council and a member of LGBT Grad, a student group run by and for MIT’s LGBT grad students and postdocs. “LGBT Grad is basically a social club for the community, and we try to organize events to get to know each other,” says Cruz. 
According to Cruz, graduate school can feel pretty lonely for members of the LGBT community, so, similar to her work with URMs, Cruz concentrates on bringing people together. “I can’t fix the whole system, which can be very frustrating at times, but I focused my efforts on supporting people and allowing us to build a community.”</p> <p>As in her research, Cruz again comes back to the importance of storytelling. In her activism on campus, she wants to make sure the stories of URMs are known and, in doing so, help remove the obstacles faced by the generations of students who come after her.</p> K. Guadalupe Cruz studies the neuroscience of decision-making and creates community in the Department of Brain and Cognitive Sciences. Photo: Steph StevensSchool of Science, Brain and cognitive sciences, Picower Institute, Students, Graduate, postdoctoral, Diversity and inclusion, Women in STEM, Student life, Profile, Community, Neuroscience Controlling attention with brain waves http://news.mit.edu/2019/controlling-attention-brain-waves-1204 Study shows that people can boost attention by manipulating their own alpha brain waves. Wed, 04 Dec 2019 10:52:23 -0500 Anne Trafton | MIT News Office http://news.mit.edu/2019/controlling-attention-brain-waves-1204 <p>Having trouble paying attention? MIT neuroscientists may have a solution for you: Turn down your alpha brain waves. In a new study, the researchers found that people can enhance their attention by controlling their own alpha brain waves based on neurofeedback they receive as they perform a particular task.</p> <p>The study found that when subjects learned to suppress alpha waves in one hemisphere of their parietal cortex, they were able to pay better attention to objects that appeared on the opposite side of their visual field. This is the first time that this cause-and-effect relationship has been seen, and it suggests that it may be possible for people to learn to improve their attention through neurofeedback.</p> <p>“There’s a lot of interest in using neurofeedback to try to help people with various brain disorders and behavioral problems,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research. “It’s a completely noninvasive way of controlling and testing the role of different types of brain activity.”</p> <p>It’s unknown how long these effects might last and whether this kind of control could be achieved with other types of brain waves, such as beta waves, which are linked to Parkinson’s disease. The researchers are now planning additional studies of whether this type of neurofeedback training might help people suffering from attentional or other neurological disorders.</p> <p>Desimone is the senior author of <a href="https://www.cell.com/neuron/fulltext/S0896-6273(19)30964-X" target="_blank">the paper</a>, which appears in <em>Neuron</em> on Dec. 4. McGovern Institute postdoc Yasaman Bagherzadeh is the lead author of the study. Daniel Baldauf, a former McGovern Institute research scientist, and Dimitrios Pantazis, a McGovern Institute principal research scientist, are also authors of the paper.</p> <p><strong>Alpha and attention</strong></p> <p>There are billions of neurons in the brain, and their combined electrical signals generate oscillations known as brain waves. 
Alpha waves, which oscillate at frequencies of 8 to 12 hertz, are believed to play a role in filtering out distracting sensory information.</p> <p>Previous studies have shown a strong correlation between attention and alpha brain waves, particularly in the parietal cortex. In humans and in animal studies, a decrease in alpha waves has been linked to enhanced attention. However, it was unclear if alpha waves control attention or are just a byproduct of some other process that governs attention, Desimone says.</p> <p>To test whether alpha waves actually regulate attention, the researchers designed an experiment in which people were given real-time feedback on their alpha waves as they performed a task. Subjects were asked to look at a grating pattern in the center of a screen, and told to use mental effort to increase the contrast of the pattern as they looked at it, making it more visible.</p> <p>During the task, subjects were scanned using magnetoencephalography (MEG), which reveals brain activity with millisecond precision. The researchers measured alpha levels in both the left and right hemispheres of the parietal cortex and calculated the degree of asymmetry between the two levels. As the asymmetry between the two hemispheres grew, the grating pattern became more visible, offering the participants real-time feedback.</p> <p>Although subjects were not told anything about what was happening, after about 20 trials (which took about 10 minutes), they were able to increase the contrast of the pattern. The MEG results indicated they had done so by controlling the asymmetry of their alpha waves.</p> <p>“After the experiment, the subjects said they knew that they were controlling the contrast, but they didn’t know how they did it,” Bagherzadeh says. “We think the basis is conditional learning — whenever you do a behavior and you receive a reward, you’re reinforcing that behavior. People usually don’t have any feedback on their brain activity, but when we provide it to them and reward them, they learn by practicing.”</p> <p>Although the subjects were not consciously aware of how they were manipulating their brain waves, they were able to do it, and this success translated into enhanced attention on the opposite side of the visual field. As the subjects looked at the pattern in the center of the screen, the researchers flashed dots of light on either side of the screen. The participants had been told to ignore these flashes, but the researchers measured how their visual cortex responded to them.</p> <p>One group of participants was trained to suppress alpha waves in the left side of the brain, while the other was trained to suppress the right side. In those who had reduced alpha on the left side, their visual cortex showed a larger response to flashes of light on the right side of the screen, while those with reduced alpha on the right side responded more to flashes seen on the left side.</p> <p>“Alpha manipulation really was controlling people’s attention, even though they didn’t have any clear understanding of how they were doing it,” Desimone says.</p> <p><strong>Persistent effect</strong></p> <p>After the neurofeedback training session ended, the researchers asked subjects to perform two additional tasks that involve attention, and found that the enhanced attention persisted. 
In one experiment, subjects were asked to watch for the appearance of a grating pattern similar to the one they had seen during the neurofeedback task. In some of the trials, they were told in advance to pay attention to one side of the visual field, but in others, they were not given any direction.</p> <p>When the subjects were told to pay attention to one side, that instruction was the dominant factor in where they looked. But if they were not given any cue in advance, they tended to pay more attention to the side that had been favored during their neurofeedback training.</p> <p>In another task, participants were asked to look at an image such as a natural outdoor scene, urban scene, or computer-generated fractal shape. By tracking subjects’ eye movements, the researchers found that people spent more time looking at the side that their alpha waves had trained them to pay attention to.</p> <p>“It is promising that the effects did seem to persist afterwards,” says Desimone, though more study is needed to determine how long these effects might last.</p> <p>“It would be interesting to understand how long-lasting these effects are, and whether you can use them therapeutically, because there’s some evidence that alpha oscillations are different in people who have attention deficits and hyperactivity disorders,” says Sabine Kastner, a professor of psychology at the Princeton Neuroscience Institute, who was not involved in the research. “If that is the case, then at least in principle, one might use this neurofeedback method to enhance their attention.”</p> <p>The research was funded by the McGovern Institute.</p> MIT neuroscientists have shown that people can enhance their attention by using neurofeedback to decrease alpha waves in one side of the parietal cortex.Image: Yasaman BagherzadehResearch, Behavior, Brain and cognitive sciences, McGovern Institute, School of Science, Neuroscience Two MIT professors named 2019 fellows of the National Academy of Inventors http://news.mit.edu/2019/tsai-schuh-elected-fellows-national-academy-inventors-1203 Li-Huei Tsai and Christopher Schuh recognized for research innovations addressing Alzheimer’s disease and metal mechanics. Tue, 03 Dec 2019 10:00:01 -0500 David Orenstein | Picower Institute http://news.mit.edu/2019/tsai-schuh-elected-fellows-national-academy-inventors-1203 <p>The National Academy of Inventors has selected two MIT faculty members, neuroscientist Li-Huei Tsai and materials scientist Christopher Schuh, as members of its 2019 class of new fellows.</p> <p>NAI fellows “have demonstrated a highly prolific spirit of innovation in creating or facilitating outstanding inventions that have made a tangible impact on the quality of life, economic development and welfare of society,” the organization stated in its announcement.</p> <p>Schuh is the department head and the Danae and Vasilis Salapatas Professor of Metallurgy in the Department of Materials Science and Engineering. His research is focused on structural metallurgy and seeks to control disorder in metallic microstructures for the purpose of optimizing mechanical properties; much of his work is on the design and control of grain boundary structure and chemistry.</p> <p>Schuh has published dozens of patents and co-founded a number of metallurgical companies. His first MIT spinout company, Xtalic Corporation, commercialized a process from his MIT laboratory to produce stable nanocrystalline coatings, which have now been deployed in over 10 billion individual components in use worldwide. 
Schuh’s startup Desktop Metal is a metal additive manufacturing company developing 3D metal printers that are simpler and lower-cost than current options, enabling broad use across many industries. Recently, Schuh co-founded Veloxint Corporation, which is commercializing machine components made from custom stable nanocrystalline alloys designed in his MIT laboratory.</p> <p>Tsai, the Picower Professor of Neuroscience and director of the Picower Institute for Learning and Memory, focuses on neurodegenerative conditions such as Alzheimer’s disease. Her work has generated a dozen patents, many of which have been licensed by biomedical companies including two startups, Cognito Therapeutics and Souvien Bio Ltd., that have spun out from her and her collaborators’ labs.</p> <p>Her team’s innovations include inhibiting an enzyme that affects the chromatin structure of DNA to rescue gene expression and restore learning and memory, and using light and sound stimulation to enhance the power and synchrony of 40-hertz gamma rhythms in the brain to reduce Alzheimer’s pathology, prevent neuron death, and preserve learning and memory. Each of these promising sets of findings in mice is now being tested in human trials.</p> <p>Tsai and Schuh join 21 colleagues from MIT who have previously been elected NAI fellows.</p> Li-Huei Tsai, left, is the Picower Professor of Neuroscience and director of The Picower Institute for Learning and Memory, and Christopher Schuh is department head and the Danae and Vasilis Salapatas Professor of Metallurgy in the Department of Materials Science and Engineering.Photos courtesy of the Picower Institute and the Department of Materials Science and EngineeringDMSE, Picower Institute, School of Engineering, School of Science, Awards, honors and fellowships, Faculty, Brain and cognitive sciences, Innovation and Entrepreneurship (I&E), Alzheimer's, Neuroscience, Materials Science and Engineering Helping machines perceive some laws of physics http://news.mit.edu/2019/adept-ai-machines-laws-physics-1202 Model registers “surprise” when objects in a scene do something unexpected, which could be used to build smarter AI. Mon, 02 Dec 2019 00:00:00 -0500 Rob Matheson | MIT News Office http://news.mit.edu/2019/adept-ai-machines-laws-physics-1202 <p>Humans have an early understanding of the laws of physical reality. Infants, for instance, hold expectations for how objects should move and interact with each other, and will show surprise when objects do something unexpected, such as disappearing in a sleight-of-hand magic trick.</p> <p>Now MIT researchers have designed a model that demonstrates an understanding of some basic “intuitive physics” about how objects should behave. The model could be used to help build smarter artificial intelligence and, in turn, provide information to help scientists understand infant cognition.</p> <p>The model, called ADEPT, observes objects moving around a scene and makes predictions about how the objects should behave, based on their underlying physics. While tracking the objects, the model outputs a signal at each video frame that correlates to a level of “surprise” — the bigger the signal, the greater the surprise. 
If an object ever dramatically mismatches the model’s predictions — by, say, vanishing or teleporting across a scene — the model’s surprise level will spike.</p> <p>In response to videos showing objects moving in physically plausible and implausible ways, the model registered levels of surprise that matched levels reported by humans who had watched the same videos.</p> <p>“By the time infants are 3 months old, they have some notion that objects don’t wink in and out of existence, and can’t move through each other or teleport,” says first author Kevin A. Smith, a research scientist in the Department of Brain and Cognitive Sciences (BCS) and a member of the Center for Brains, Minds, and Machines (CBMM). “We wanted to capture and formalize that knowledge to build infant cognition into artificial-intelligence agents. We’re now getting near human-like in the way models can pick apart basic implausible or plausible scenes.”</p> <p>Joining Smith on the paper are co-first authors Lingjie Mei, an undergraduate in the Department of Electrical Engineering and Computer Science, and BCS research scientist Shunyu Yao; Jiajun Wu PhD ’19; CBMM investigator Elizabeth Spelke; Joshua B. Tenenbaum, a professor of computational cognitive science, and researcher in CBMM, BCS, and the Computer Science and Artificial Intelligence Laboratory (CSAIL); and CBMM investigator Tomer D. Ullman PhD ’15.</p> <p><strong>Mismatched realities</strong></p> <p>ADEPT relies on two modules: an “inverse graphics” module that captures object representations from raw images, and a “physics engine” that predicts the objects’ future representations from a distribution of possibilities.</p> <p>Inverse graphics basically extracts information about objects — such as shape, pose, and velocity — from pixel inputs. This module captures frames of video as images and uses inverse graphics to extract this information from objects in the scene. But it doesn’t get bogged down in the details. ADEPT requires only some approximate geometry of each shape to function. In part, this helps the model generalize predictions to new objects, not just those it’s trained on.</p> <p>“It doesn’t matter if an object is a rectangle or circle, or if it’s a truck or a duck. ADEPT just sees there’s an object with some position, moving in a certain way, to make predictions,” Smith says. “Similarly, young infants also don’t seem to care much about some properties like shape when making physical predictions.”</p> <p>These coarse object descriptions are fed into a physics engine — software that simulates behavior of physical systems, such as rigid or fluidic bodies, and is commonly used for films, video games, and computer graphics. The researchers’ physics engine “pushes the objects forward in time,” Ullman says. This creates a range of predictions, or a “belief distribution,” for what will happen to those objects in the next frame.</p> <p>Next, the model observes the actual next frame. Once again, it captures the object representations, which it then aligns to one of the predicted object representations from its belief distribution. If the object obeyed the laws of physics, there won’t be much mismatch between the two representations. On the other hand, if the object did something implausible — say, it vanished from behind a wall — there will be a major mismatch.</p> <p>ADEPT then resamples from its belief distribution and notes a very low probability that the object had simply vanished. 
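</p> <p>One standard way to turn that probability into a surprise signal is the information-theoretic “surprisal,” the negative log of the probability assigned to what was actually observed; the snippet below is a minimal sketch of the idea, not necessarily ADEPT’s exact formulation:</p> <pre><code>import numpy as np

def surprise(prob_of_observation, floor=1e-9):
    """Surprisal in bits: modest for expected frames, spiking sharply
    as the assigned probability of the observed frame approaches zero."""
    return -np.log2(max(prob_of_observation, floor))

print(surprise(0.9))    # an expected frame: ~0.15 bits
print(surprise(1e-6))   # an object vanished: ~20 bits, a large spike
</code></pre> <p>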
If there’s a low enough probability, the model registers great “surprise” as a signal spike. Basically, surprise is inversely related to the probability of an event occurring: If the probability is very low, the signal spike is very high.</p> <p>“If an object goes behind a wall, your physics engine maintains a belief that the object is still behind the wall. If the wall goes down, and nothing is there, there’s a mismatch,” Ullman says. “Then, the model says, ‘There’s an object in my prediction, but I see nothing. The only explanation is that it disappeared, so that’s surprising.’”</p> <p><strong>Violation of expectations</strong></p> <p>In developmental psychology, researchers run “violation of expectations” tests in which infants are shown pairs of videos. One video shows a plausible event, with objects adhering to their expected notions of how the world works. The other video is the same in every way, except objects behave in a way that violates expectations in some way. Researchers will often use these tests to measure how long the infant looks at a scene after an implausible action has occurred. The longer they stare, researchers hypothesize, the more they may be surprised or interested in what just happened.</p> <p>For their experiments, the researchers created several scenarios based on classical developmental research to examine the model’s core object knowledge. They recruited 60 adults to watch 64 videos of known physically plausible and physically implausible scenarios. Objects, for instance, will move behind a wall and, when the wall drops, they’ll still be there or they’ll be gone. The participants rated their surprise at various moments on an increasing scale of 0 to 100. Then, the researchers showed the same videos to the model. Specifically, the scenarios examined the model’s ability to capture notions of permanence (objects do not appear or disappear for no reason), continuity (objects move along connected trajectories), and solidity (objects cannot move through one another).</p> <p>ADEPT matched humans particularly well on videos where objects moved behind walls and disappeared when the wall was removed. Interestingly, the model also matched surprise levels on videos that humans weren’t surprised by but maybe should have been. For example, in a video where an object moving at a certain speed disappears behind a wall and immediately comes out the other side, the object might have sped up dramatically when it went behind the wall or it might have teleported to the other side. In general, humans and ADEPT were both less certain about whether that event was or wasn’t surprising. The researchers also found traditional neural networks that learn physics from observations — but don’t explicitly represent objects — are far less accurate at differentiating surprising from unsurprising scenes, and their picks for surprising scenes don’t often align with humans.</p> <p>Next, the researchers plan to delve further into how infants observe and learn about the world, with the aim of incorporating any new findings into their model. 
Studies, for example, show that infants up until a certain age actually aren’t very surprised when objects completely change in some ways — such as if a truck disappears behind a wall, but reemerges as a duck.</p> <p>“We want to see what else needs to be built in to understand the world more like infants, and formalize what we know about psychology to build better AI agents,” Smith says.</p> An MIT-invented model demonstrates an understanding of some basic “intuitive physics” by registering “surprise” when objects in simulations move in unexpected ways, such as rolling behind a wall and not reappearing on the other side.Image: Christine Daniloff, MITResearch, Computer science and technology, Algorithms, Artificial intelligence, Machine learning, Computer vision, Computer Science and Artificial Intelligence Laboratory (CSAIL), Brain and cognitive sciences, Electrical Engineering & Computer Science (eecs), School of Engineering, Center for Brains Minds and Machines Bot can beat humans in multiplayer hidden-role games http://news.mit.edu/2019/deeprole-ai-beat-humans-role-games-1120 Using deductive reasoning, the bot identifies friend or foe to ensure victory over humans in certain online games. Tue, 19 Nov 2019 23:59:59 -0500 Rob Matheson | MIT News Office http://news.mit.edu/2019/deeprole-ai-beat-humans-role-games-1120 <p>MIT researchers have developed a bot equipped with artificial intelligence that can beat human players in tricky online multiplayer games where player roles and motives are kept secret.</p> <p>Many gaming bots have been built to keep up with human players. Earlier this year, a team from Carnegie Mellon University developed the world’s first bot that can beat professionals in multiplayer poker. DeepMind’s AlphaGo made headlines in 2016 for besting a professional Go player. Several bots have also been built to beat professional chess players or join forces in cooperative games such as online capture the flag. In these games, however, the bot knows its opponents and teammates from the start.</p> <p>At the Conference on Neural Information Processing Systems next month, the researchers will present DeepRole, the first gaming bot that can win online multiplayer games in which the participants’ team allegiances are initially unclear. The bot is designed with novel “deductive reasoning” added into an AI algorithm commonly used for playing poker. This helps it reason about partially observable actions, to determine the probability that a given player is a teammate or opponent. In doing so, it quickly learns whom to ally with and which actions to take to ensure its team’s victory.</p> <p>The researchers pitted DeepRole against human players in more than 4,000 rounds of the online game “The Resistance: Avalon.” In this game, players try to deduce their peers’ secret roles as the game progresses, while simultaneously hiding their own roles. As both a teammate and an opponent, DeepRole consistently outperformed human players.</p> <p>“If you replace a human teammate with a bot, you can expect a higher win rate for your team. Bots are better partners,” says first author Jack Serrino ’18, who majored in electrical engineering and computer science at MIT and is an avid online “Avalon” player.</p> <p>The work is part of a broader project to better model how humans make socially informed decisions. 
Doing so could help build robots that better understand, learn from, and work with humans.</p> <p>“Humans learn from and cooperate with others, and that enables us to achieve together things that none of us can achieve alone,” says co-author Max Kleiman-Weiner, a postdoc in the Center for Brains, Minds and Machines and the Department of Brain and Cognitive Sciences at MIT, and at Harvard University. “Games like ‘Avalon’ better mimic the dynamic social settings humans experience in everyday life. You have to figure out who’s on your team and will work with you, whether it’s your first day of kindergarten or another day in your office.”</p> <p>Joining Serrino and Kleiman-Weiner on the paper are David C. Parkes of Harvard and Joshua B. Tenenbaum, a professor of computational cognitive science and a member of MIT’s Computer Science and Artificial Intelligence Laboratory and the Center for Brains, Minds and Machines.</p> <p><strong>Deductive bot</strong></p> <p>In “Avalon,” three players are randomly and secretly assigned to a “resistance” team and two players to a “spy” team. Both spy players know all players’ roles. During each round, one player proposes a subset of two or three players to execute a mission. All players simultaneously and publicly vote to approve or disapprove the subset. If a majority approve, the subset secretly determines whether the mission will succeed or fail. If two “succeeds” are chosen, the mission succeeds; if one “fail” is selected, the mission fails. Resistance players must always choose to succeed, but spy players may choose either outcome. The resistance team wins after three successful missions; the spy team wins after three failed missions.</p> <p>Winning the game basically comes down to deducing who is resistance or spy, and voting for your collaborators. But that’s actually more computationally complex than playing chess and poker. “It’s a game of imperfect information,” Kleiman-Weiner says. “You’re not even sure who you’re against when you start, so there’s an additional discovery phase of finding whom to cooperate with.”</p> <p>DeepRole uses a game-planning algorithm called “counterfactual regret minimization” (CFR) — which learns to play a game by repeatedly playing against itself — augmented with deductive reasoning. At each point in a game, CFR looks ahead to create a decision “game tree” of lines and nodes describing the potential future actions of each player. Game trees represent all possible actions (lines) each player can take at each future decision point. In playing out potentially billions of game simulations, CFR notes which actions had increased or decreased its chances of winning, and iteratively revises its strategy to include more good decisions. Eventually, it plans an optimal strategy that, at worst, ties against any opponent.</p> <p>CFR works well for games like poker, with public actions — such as betting money and folding a hand — but it struggles when actions are secret. The researchers’ CFR combines public actions and consequences of private actions to determine if players are resistance or spy.</p> <p>The bot is trained by playing against itself as both resistance and spy. When playing an online game, it uses its game tree to estimate what each player is going to do. The game tree represents a strategy that gives each player the highest likelihood to win as an assigned role. 
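</p> <p>The self-play update inside CFR is typically implemented with “regret matching”: after each simulated game, the algorithm tallies how much better each unchosen action would have done, then plays actions in proportion to that accumulated positive regret. Below is a stripped-down sketch of that textbook rule; DeepRole’s full algorithm layers deductive reasoning on top of updates like this one:</p> <pre><code>import numpy as np

def regret_matching_strategy(cumulative_regret):
    """Play each action with probability proportional to its positive
    accumulated regret; fall back to uniform when nothing has regret."""
    positive = np.maximum(cumulative_regret, 0.0)
    total = positive.sum()
    if total > 0:
        return positive / total
    return np.ones_like(positive) / len(positive)

# e.g., regrets accumulated for [approve, reject] across self-play games
print(regret_matching_strategy(np.array([6.0, 2.0])))  # -> [0.75, 0.25]
</code></pre> <p>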
The tree’s nodes contain “counterfactual values,” which are basically estimates of the payoff each player receives if they play that given strategy.</p> <p>At each mission, the bot looks at how each person played in comparison to the game tree. If, throughout the game, a player makes enough decisions that are inconsistent with the bot’s expectations, then the player is probably playing as the other role. Eventually, the bot assigns a high probability for each player’s role. These probabilities are used to update the bot’s strategy to increase its chances of victory.</p> <p>Simultaneously, it uses this same technique to estimate how a third-person observer might interpret its own actions. This helps it estimate how other players may react, helping it make more intelligent decisions. “If it’s on a two-player mission that fails, the other players know one player is a spy. The bot probably won’t propose the same team on future missions, since it knows the other players think it’s bad,” Serrino says.</p> <p><strong>Language: The next frontier</strong></p> <p>Interestingly, the bot did not need to communicate with other players, which is usually a key component of the game. “Avalon” enables players to chat on a text module during the game. “But it turns out our bot was able to work well with a team of other humans while only observing player actions,” Kleiman-Weiner says. “This is interesting, because one might think games like this require complicated communication strategies.”</p> <p>“I was thrilled to see this paper when it came out,” says Michael Bowling, a professor at the University of Alberta whose research focuses, in part, on training computers to play games. “It is really exciting seeing the ideas in DeepStack see broader application outside of poker. [DeepStack brought ideas that have] been so central to AI in chess and Go to situations of imperfect information. But I still wasn’t expecting to see it extended so quickly into the situation of a hidden role game like Avalon. Being able to navigate a social deduction scenario, which feels so quintessentially human, is a really important step. There is still much work to be done, especially when the social interaction is more open ended, but we keep seeing that many of the fundamental AI algorithms with self-play learning can go a long way.”</p> <p>Next, the researchers may enable the bot to communicate during games with simple text, such as saying a player is good or bad. That would involve assigning text to the correlated probability that a player is resistance or spy, which the bot already uses to make its decisions. Beyond that, a future bot might be equipped with more complex communication capabilities, enabling it to play language-heavy social-deduction games — such as the popular game “Werewolf” — which involve several minutes of arguing and persuading other players about who’s on the good and bad teams.</p> <p>“Language is definitely the next frontier,” Serrino says. 
“But there are many challenges to attack in those games, where communication is so key.”</p> DeepRole, an MIT-invented gaming bot equipped with “deductive reasoning,” can beat human players in tricky online multiplayer games where player roles and motives are kept secret.Research, Computer science and technology, Algorithms, Video games, Artificial intelligence, Machine learning, Language, Computer Science and Artificial Intelligence Laboratory (CSAIL), Brain and cognitive sciences, Electrical Engineering & Computer Science (eecs), School of Engineering Students push to speed up artificial intelligence adoption in Latin America http://news.mit.edu/2019/students-push-to-speed-up-ai-adoption-latin-america-1119 To help the region catch up, students organize summit to bring Latin policymakers and researchers to MIT. Tue, 19 Nov 2019 16:30:01 -0500 Kim Martineau | MIT Quest for Intelligence http://news.mit.edu/2019/students-push-to-speed-up-ai-adoption-latin-america-1119 <p>Omar Costilla Reyes reels off all the ways that artificial intelligence might benefit his native Mexico. It could raise living standards, he says, lower health care costs, improve literacy and promote greater transparency and accountability in government.</p> <p>But Mexico, like many of its Latin American neighbors, has failed to invest as heavily in AI as other developing countries. That worries <a href="https://omarcostilla.mit.edu/">Costilla Reyes</a>, a postdoc at MIT’s Department of Brain and Cognitive Sciences.</p> <p>To give the region a nudge, Costilla Reyes and three other MIT graduate students — <a href="https://www.media.mit.edu/people/gbernal/overview/" target="_blank">Guillermo Bernal</a>, <a href="https://polisci.mit.edu/people/emilia-simison">Emilia Simison</a> and <a href="https://www.media.mit.edu/people/pe25171/overview/">Pedro Colon-Hernandez</a> — have spent the last six months putting together a three-day event that will bring together policymakers and AI researchers in Latin America with AI researchers in the United States. The <a href="http://ailatinsum.mit.edu/">AI Latin American sumMIT</a> will take place in January at the <a href="https://www.media.mit.edu/">MIT Media Lab</a>.</p> <p>“Africa is getting lots of support — Africa will eventually catch up,” Costilla Reyes says. “You don’t see anything like that in Latin America, despite the potential for AI to move the region forward socially and economically.”</p> <p><strong>Four paths to MIT and research inspired by AI</strong></p> <p>Each of the four students took a different route to MIT, where AI plays a central role in their work — on the brain, voice assistants, augmented creativity and politics. Costilla Reyes got his first computer in high school, and though it had only dial-up internet access, it exposed him to a world far beyond his home city of Toluca. He studied for a PhD at the University of Manchester, where he developed an <a href="https://www.manchester.ac.uk/discover/news/ai-footstep-recognition-system-could-be-used-for-airport-security/">AI system</a> with applications in security and health to identify individuals by their gait. At MIT, Costilla Reyes is building computational models of how firing neurons in the brain produce memory and cognition, information he hopes can also advance AI.</p> <p>After graduating from a vocational high school in El Salvador, Bernal moved in with relatives in New Jersey and studied English at a nearby community college. 
He continued on to Pratt Institute, where he learned to incorporate Python into his design work. Now at the MIT Media Lab, he’s developing interactive storytelling tools like <a href="https://www.media.mit.edu/projects/paper-dreams/overview/">PaperDreams</a>, which uses AI to help people unlock their creativity. His work recently won a <a href="https://arts.mit.edu/2019-harold-and-arlene-schnitzer-prize-in-the-visual-arts/">Schnitzer Prize</a>.</p> <p>Simison came to MIT to study for a PhD in political science after her professors at Argentina’s Torcuato Di Tella University encouraged her to continue her studies in the United States. She is currently using text analysis tools to mine archival records in Brazil and Argentina to understand the role that political parties and unions played under the last dictatorships in both countries.</p> <p>Colon-Hernandez grew up in Puerto Rico fascinated with video games. A robotics class in high school inspired him to build a computer to play video games of his own, which led to a degree in computer engineering at the University of Puerto Rico at Mayagüez. After helping a friend with a project at MIT Lincoln Laboratory, Colon-Hernandez applied to a summer research program at MIT, and later, the MIT Media Lab’s graduate program. He’s currently working on intelligent voice assistants.</p> <p>It’s hard to generalize about a region as culturally diverse and geographically vast as Latin America, stretching from Mexico and the Caribbean to the tip of South America. But protests, violence and reports of entrenched corruption have dominated the news for years, and the average income per person has been <a href="https://www.economist.com/the-americas/2019/05/30/why-latin-americas-economies-are-stagnating">falling</a> with respect to the United States since the 1950s. All four students see AI as a means to bring stability and greater opportunity to their home countries.</p> <p><strong>AI with a humanitarian agenda</strong></p> <p>The idea to bring Latin American policymakers to MIT was hatched last December, at the world’s premier conference for AI research, <a href="https://nips.cc/">NeurIPS</a>. The organizers of NeurIPS had launched several new workshops to promote diversity in response to growing criticism of the exclusion of women and minorities in tech. At <a href="https://www.latinxinai.org/neurips-2018">Latinx,</a> a workshop for Latin American students, Costilla Reyes met Colon-Hernandez, who was giving a talk on voice-activated wearables. A few hours later they began drafting a plan to bring a Latinx-style event to MIT.</p> <p>Back in Cambridge, they found support from <a href="https://people.csail.mit.edu/asolar/">Armando Solar-Lezama</a>, a <a href="http://news.mit.edu/2017/faculty-profile-armando-solar-lezama-0526">native of Mexico</a> and a professor at MIT’s <a href="https://www.eecs.mit.edu/">Department of Electrical Engineering and Computer Science</a>. They also began knocking on doors for funding, securing an initial $25,000 grant from MIT’s <a href="https://diversity.mit.edu/">Institute Community and Equity Office</a>. Other graduate students joined the cause, and together they set out to recruit speakers, reserve space at the MIT Media Lab and design a website. 
RIMAC, the MIT-IBM Watson AI Lab, X Development, and Facebook have all since offered support for the event.</p> <p>Unlike other AI conferences, this one has a practical bent, with themes that echo many of the UN Sustainable Development Goals: ending extreme poverty, developing quality education, creating fair and transparent institutions, addressing climate change and providing good health.</p> <p>The students have set similarly concrete goals for the conference, from mapping the current state of AI adoption across Latin America to outlining steps policymakers can take to coordinate efforts. U.S. researchers will offer tutorials on open-source AI platforms like TensorFlow and scikit-learn for Python, and the students are continuing to raise money to fly 10 of their counterparts from Latin America to attend the poster session.</p> <p>“We reinvent the wheel so much of the time,” says Simison. “If we can motivate countries to integrate their efforts, progress could move much faster.”</p> <p>The potential rewards are high. A <a href="https://www.accenture.com/_acnmedia/pdf-49/accenture-how-artificial-intelligence-can-drive-south-americas-growth.pdf">2017 report</a> by Accenture estimated that if AI were integrated into South America’s top five economies — Argentina, Brazil, Chile, Colombia and Peru — which generate about 85 percent of the continent’s economic output, each could add up to 1 percent to its annual growth rate.</p> <p>In developed regions like the United States and Europe, AI is sometimes viewed apprehensively for its potential to eliminate jobs, spread misinformation and perpetuate bias and inequality. But the risk of not embracing AI, especially in countries that are already lagging behind economically, is potentially far greater, says Solar-Lezama. “There’s an urgency to make sure these countries have a seat at the table and can benefit from what will be one of the big engines for economic development in the future,” he says.</p> <p>Post-conference deliverables include a set of recommendations to help policymakers move forward. “People are protesting across the entire continent due to the marginal living conditions that most face,” says Costilla Reyes. “We believe that AI plays a key role now, and in the future development of the region, if it’s used in the right way.”</p> “We believe that AI plays a key role now, and in the future development of the region, if it’s used in the right way,” says Omar Costilla Reyes, one of four MIT graduate students working to help Latin America adopt artificial intelligence technologies. Pictured here (left to right) are Costilla Reyes, Emilia Simison, Pedro Antonio Colon-Hernandez, and Guillermo Bernal.Photo: Kim MartineauQuest for Intelligence, Electrical engineering and computer science (EECS), Media Lab, Brain and cognitive sciences, Lincoln Laboratory, MIT-IBM Watson AI Lab, School of Engineering, School of Science, School of Humanities Arts and Social Sciences, Artificial intelligence, Computer science and technology, Technology and society, Machine learning, Software, Algorithms, Political science, Latin America School of Science appoints 14 faculty members to named professorships http://news.mit.edu/2019/14-mit-faculty-members-appointed-named-professorships-school-science-1104 Those selected for these positions receive additional support to pursue their research and develop their careers.
Mon, 04 Nov 2019 11:50:01 -0500 School of Science http://news.mit.edu/2019/14-mit-faculty-members-appointed-named-professorships-school-science-1104 <p>The <a href="http://science.mit.edu">School of Science</a> has announced that 14 of its faculty members have been appointed to named professorships. The faculty members selected for these positions receive additional support to pursue their research and develop their careers.</p> <p><a href="https://web.mit.edu/physics/people/faculty/comin_riccardo.html">Riccardo Comin</a> is an assistant professor in the Department of Physics. He has been named a Class of 1947 Career Development Professor. This three-year professorship is granted in recognition of the recipient's outstanding work in both research and teaching. Comin is interested in condensed matter physics. He uses experimental methods to synthesize new materials, and analyzes them with spectroscopy and scattering to investigate solid-state physics. Specifically, the Comin lab attempts to discover and characterize electronic phases of quantum materials. Recently, his lab, in collaboration with colleagues, discovered that weaving a conductive material into the lattice known as the “kagome” pattern can result in quantum behavior when electricity is passed through it.</p> <p><a href="https://biology.mit.edu/profile/joseph-joey-davis/">Joseph Davis</a>, assistant professor in the Department of Biology, has been named a Whitehead Career Development Professor. He looks at how cells build and deconstruct complex molecular machinery. The work of his lab group relies on biochemistry, biophysics, and structural approaches that include spectrometry and microscopy. A current project investigates the formation of the ribosome, an essential component in all cells. His work has implications for metabolic engineering, drug delivery, and materials science.</p> <p><a href="https://math.mit.edu/directory/profile.php?pid=1461">Lawrence Guth</a> is now the Claude E. Shannon (1940) Professor of Mathematics. Guth explores harmonic analysis and combinatorics, and he is also interested in metric geometry and identifying connections between geometric inequalities and topology. Metric geometry revolves around estimating measurements such as length, area, volume and distance, while combinatorial geometry concerns estimating how patterns of simple shapes, such as lines and circles, intersect.</p> <p><a href="https://bcs.mit.edu/users/mhalassamitedu">Michael Halassa</a>, an assistant professor in the Department of Brain and Cognitive Sciences, will hold the three-year Class of 1958 Career Development Professorship. His area of interest is brain circuitry. By investigating the networks and connections in the brain, he hopes to understand how they operate — and identify any ways in which they might deviate from normal operations, causing neurological and psychiatric disorders. Several publications from his lab discuss improvements in the treatment of the deleterious symptoms of autism spectrum disorder and schizophrenia, and his latest work provides insights into how the brain filters out distractions, particularly noise.
Halassa is an associate investigator at the McGovern Institute for Brain Research and an affiliate member of the Picower Institute for Learning and Memory.</p> <p><a href="https://biology.mit.edu/profile/sebastian-lourido/">Sebastian Lourido</a>, an assistant professor and the new Latham Family Career Development Professor in the Department of Biology for the next three years, works on treatments for infectious disease by learning about parasitic vulnerabilities. Focusing on human pathogens, Lourido and his lab are interested in what allows parasites to be so widespread and deadly, working at the molecular level. This includes exploring how calcium regulates eukaryotic cells and, in turn, affects processes such as muscle contraction, membrane repair, and kinase responses.</p> <p><a href="https://glaciers.mit.edu/">Brent Minchew</a> has been named a Cecil and Ida Green Career Development Professor for a three-year term. Minchew, a faculty member in the Department of Earth, Atmospheric and Planetary Sciences, studies glaciers using modeling and remote sensing methods, such as interferometric synthetic aperture radar. His research into glaciers, including their mechanics, rheology, and interactions with their surrounding environment, extends as far as observing their responses to climate change. His group recently determined that Antarctica, in a worst-case scenario climate projection, would not contribute as much as predicted to rising sea level.</p> <p><a href="https://nedivilab.mit.edu/">Elly Nedivi</a>, a professor in the departments of Brain and Cognitive Sciences and Biology, has been named the <a href="https://science.mit.edu/nedivi-named-to-new-professorship/">inaugural</a> William R. (1964) and Linda R. Young Professor. She works on brain plasticity, defined as the brain’s ability to adapt with experience, by identifying genes that play a role in plasticity and their neuronal and synaptic functions. One of her lab’s recent publications suggests that variants of a particular gene may undermine expression or production of a protein, increasing the risk of bipolar disorder. In addition, she collaborates with others at MIT to develop new microscopy tools that allow better analysis of brain connectivity. Nedivi is also a member of the Picower Institute for Learning and Memory.</p> <p><a href="https://math.mit.edu/directory/profile.php?pid=1698">Andrei Neguț</a> has been named a Class of 1947 Career Development Professor for a three-year term. Neguț, a member of the Department of Mathematics, focuses on problems in geometric representation theory. This topic requires investigation within algebraic geometry and representation theory simultaneously, with implications for mathematical physics, symplectic geometry, combinatorics and probability theory.</p> <p><a href="https://eapsweb.mit.edu/people/mpec">Matěj Peč</a>, the Victor P. Starr Career Development Professor in the Department of Earth, Atmospheric and Planetary Sciences until 2021, studies how the movement of the Earth’s tectonic plates affects rocks, mechanically and microstructurally. To investigate such a large-scale topic, he uses high-pressure, high-temperature experiments in a lab to simulate the driving forces associated with plate motion, and compares results with natural observations and theoretical modeling.
His lab has identified a particular boundary beneath the Earth’s crust where rock properties shift from brittle, like peanut brittle, to viscous, like honey, and has determined how that layer accommodates the strain that builds between the two. His investigations also consider the effects of these forces on melt generation miles underground.</p> <p><a href="https://web.mit.edu/physics/people/faculty/perez_kerstin.html">Kerstin Perez</a> has been named the three-year Class of 1948 Career Development Professor in the Department of Physics. Her research interest is dark matter. She uses novel analytical tools, such as a balloon-borne instrument that uses cosmic rays to detect new particle interactions in space, much as a particle collider (like the Large Hadron Collider) does on the ground. In another research project, Perez uses a satellite telescope array orbiting Earth to search for X-ray signatures of mysterious particles. Her work requires heavy involvement with collaborative observatories, instruments, and telescopes. Perez is affiliated with the Kavli Institute for Astrophysics and Space Research.</p> <p><a href="https://math.mit.edu/directory/profile.php?pid=213">Bjorn Poonen</a>, named a Distinguished Professor of Science in the Department of Mathematics, studies number theory and algebraic geometry. He and his colleagues devise algorithms that can solve polynomial equations with the particular requirement that the solutions be rational numbers. These types of problems can be useful in encoding data. He also works to determine what cannot be determined, exploring the limits of computing.</p> <p><a href="https://chemistry.mit.edu/profile/daniel-leif-migdow-suess/">Daniel Suess</a>, named a Class of 1948 Career Development Professor in the Department of Chemistry, uses molecular chemistry to explain global biogeochemical cycles. In the fields of inorganic and biological chemistry, Suess and his lab work to understand complex and challenging reactions, including the clustering of particular chemical elements and their catalysts. Most notably, these include reactions essential to solar fuels. His investigations of both biological and synthetic systems aim broadly to improve human health and decrease environmental impacts.</p> <p><a href="https://chemistry.mit.edu/profile/alison-wendlandt/">Alison Wendlandt</a> is the new holder of the five-year Cecil and Ida Green Career Development Professorship. In the Department of Chemistry, the Wendlandt research group focuses on physical organic chemistry and organic and organometallic synthesis to develop reaction catalysts. Her team focuses on designing new catalysts, identifying processes to which these catalysts can be applied, and determining principles that can expand preexisting reactions. Her team’s efforts delve into the fields of synthetic organic chemistry, reaction kinetics, and reaction mechanisms.</p> <p><a href="https://eapsweb.mit.edu/people/jdewit">Julien de Wit</a>, a Department of Earth, Atmospheric and Planetary Sciences assistant professor, has been named a Class of 1954 Career Development Professor. He combines math and science to answer big-picture planetary questions. Using data science, de Wit develops new analytical techniques for mapping exoplanetary atmospheres, studies planet-star interactions of planetary systems, and determines atmospheric and planetary properties of exoplanets from spectroscopic information.
He is a member of the scientific team involved in the Search for habitable Planets EClipsing ULtra-cOOl Stars (SPECULOOS) and the TRANsiting Planets and Planetesimals Small Telescope (TRAPPIST), made up of an international collection of observatories. He is affiliated with the Kavli Institute.</p> Clockwise from top left: Riccardo Comin, Joseph Davis, Lawrence Guth, Michael Halassa, Sebastian Lourido, Brent Minchew, Elly Nedivi, Andrei Neguț, Matěj Peč, Kerstin Perez, Bjorn Poonen, Daniel Suess, Alison Wendlandt, and Julien de Wit Photos courtesy of the faculty.School of Science, Physics, Biology, Mathematics, Brain and cognitive sciences, McGovern Institute, Picower Institute, EAPS, Kavli Institute, Chemistry, Faculty, Awards, honors and fellowships MIT announces framework to guide negotiations with publishers http://news.mit.edu/2019/mit-announces-framework-guide-negotiations-publishers-1023 Principle-based framework aims to support the needs of scholars, reflect MIT principles, and advance science. Wed, 23 Oct 2019 10:55:01 -0400 Brigham Fay | MIT Libraries http://news.mit.edu/2019/mit-announces-framework-guide-negotiations-publishers-1023 <p>The MIT Libraries, together with the MIT Committee on the Library System and the Ad Hoc Task Force on Open Access to MIT’s Research, announced that they have developed a <a href="http://libraries.mit.edu/scholarly/publishing/framework/" target="_blank">principle-based framework</a> to guide negotiations with scholarly publishers. The framework emerges directly from the core principles for open science and open scholarship articulated in the recommendations of the <a href="http://open-access.mit.edu/" target="_blank">Task Force on Open Access to MIT’s Research</a>, which released its <a href="http://news.mit.edu/2019/open-access-task-force-releases-final-recommendations-1017" target="_self">final report</a> to the MIT community on Oct. 17.</p> <p>The framework affirms the overarching principle that control of scholarship and its dissemination should reside with scholars and their institutions. It aims to ensure that scholarly research outputs are openly and equitably available to the broadest possible audience, while also providing valued services to the MIT community.</p> <p>“The value of scholarly content primarily comes from researchers, authors, and peer reviewers — the people who are creating knowledge and reviewing and improving it,” says Roger Levy, associate professor of brain and cognitive sciences and chair of the Committee on the Library System. “We think authors should have considerable rights to their own intellectual outputs.”</p> <p>In MIT’s model, institutions and scholars maintain the rights to share their work openly via institutional repositories, and publishers are paid for the services valued by authors and readers, such as curation and peer-review management.</p> <p>“The MIT Framework gives us a starting point for imagining journals as a service,” says Chris Bourg, director of the MIT Libraries.</p> <p>The framework was developed by members of the Open Access Task Force, the Committee on the Library System, and MIT Libraries staff, and vetted by faculty groups across the Institute.</p> <p>“The ideas in the framework are not new for MIT, which has been a leader in sharing its knowledge with the world,” says Bourg. “This is a clear articulation by the MIT faculty of what they want in scholarly communications — a scholar-led, open, and equitable environment that promises to advance knowledge and its applications.
It is also a model that we think will be appealing for a diverse range of scholarly institutions, from private research-intensive universities like MIT to small liberal arts colleges and large public universities.”</p> <p>“The six core principles of the MIT Framework free researchers and research institutions to follow their own lights in sharing their research, and help ensure that scholarly communities will retain control of scholarly communication,” says Peter Suber, director of the Harvard University Library Office for Scholarly Communication.</p> <p>While MIT intends to rely on this framework as a guide for relationships with publishers regardless of the actions of any peer institutions or other organizations, institutions ranging from large research universities to liberal arts colleges have decided to endorse the framework in recognition of its potential to advance open scholarship and the public good.</p> <p>“The MIT Framework values the labor and rights of authors, while respecting a role for journals and publishers,” says Janelle Wertzberger, assistant dean and director of scholarly communications at Gettysburg College. “It balances author rights with user benefits by ensuring that published research will reach the widest possible audience. This approach aims to realign the current publishing system with the needs of all stakeholders within the system, and thereby creates positive change for all.”</p> <p>A full list of endorsers is available at <a href="https://libraries.mit.edu/scholarly/publishing/framework/" target="_blank">libraries.mit.edu/scholarly/publishing/framework</a>. Additional institutions are also invited to add their endorsement on this page.</p> <p>MIT originally passed its <a href="http://libraries.mit.edu/scholarly/mit-open-access/open-access-policy/" target="_blank">Faculty Open Access Policy</a> in 2009; it was one of the first in the country and the first to be adopted university-wide. Today, close to 50 percent of MIT faculty-authored journal articles are freely available in <a href="http://dspace.mit.edu/" target="_blank">DSpace@MIT</a>, the Institute’s repository.</p> The MIT Libraries, together with the MIT Committee on the Library System and the Ad Hoc Task Force on Open Access to MIT’s Research, have developed a principle-based framework to guide negotiations with scholarly publishers. Photo: Jake BelcherLibraries, Open access, Research, Brain and cognitive sciences, School of Science Drug combination reverses hypersensitivity to noise http://news.mit.edu/2019/autism-hypersensitivity-noise-drug-1021 Findings in mice suggest targeting certain brain circuits could offer new ways to treat some neurological disorders. Mon, 21 Oct 2019 10:59:59 -0400 Anne Trafton | MIT News Office http://news.mit.edu/2019/autism-hypersensitivity-noise-drug-1021 <p>People with autism often experience hypersensitivity to noise and other sensory input. MIT neuroscientists have now identified two brain circuits that help tune out distracting sensory information, and they have found a way to reverse noise hypersensitivity in mice by boosting the activity of those circuits.</p> <p>One of the circuits the researchers identified is involved in filtering noise, while the other exerts top-down control by allowing the brain to switch its attention between different sensory inputs.</p> <p>The researchers showed that restoring the function of both circuits worked much better than treating either circuit alone.
This demonstrates the benefits of mapping and targeting multiple circuits involved in neurological disorders, says Michael Halassa, an assistant professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research.</p> <p>“We think this work has the potential to transform how we think about neurological and psychiatric disorders, [so that we see them] as a combination of circuit deficits,” says Halassa, the senior author of the study. “The way we should approach these brain disorders is to map, to the best of our ability, what combination of deficits are there, and then go after that combination.”</p> <p>MIT postdoc Miho Nakajima and research scientist L. Ian Schmitt are the lead authors of the paper, which appears in <em>Neuron</em> on Oct. 21. Guoping Feng, the James W. and Patricia Poitras Professor of Neuroscience and a member of the McGovern Institute, is also an author of the paper.</p> <p><strong>Hypersensitivity</strong></p> <p>Many gene variants have been linked with autism, but most patients have very few, if any, of those variants. One such gene is ptchd1, which is mutated in about 1 percent of people with autism. In a <a href="http://news.mit.edu/2016/basis-attention-deficits-adhd-treatments-0323">2016 study</a>, Halassa and Feng found that during development this gene is primarily expressed in a part of the thalamus called the thalamic reticular nucleus (TRN).</p> <p>That study revealed that neurons of the TRN help the brain to adjust to changes in sensory input, such as noise level or brightness. In mice with ptchd1 missing, TRN neurons fire too fast, and they can’t adjust when noise levels change. This prevents the TRN from performing its usual sensory filtering function, Halassa says.</p> <p>“Neurons that are there to filter out noise, or adjust the overall level of activity, are not adapting. Without the ability to fine-tune the overall level of activity, you can get overwhelmed very easily,” he says.</p> <p>In the 2016 study, the researchers also found that they could restore some of the mice’s noise filtering ability by treating them with a drug called EBIO that activates neurons’ potassium channels. EBIO has harmful cardiac side effects, so it likely could not be used in human patients, but other drugs that boost TRN activity may have a similar beneficial effect on hypersensitivity, Halassa says.</p> <p>In the new <em>Neuron</em> paper, the researchers delved more deeply into the effects of ptchd1, which is also expressed in the prefrontal cortex. To explore whether the prefrontal cortex might play a role in the animals’ hypersensitivity, the researchers used a task in which mice have to distinguish between three different tones, presented with varying amounts of background noise.</p> <p>Normal mice can learn to use a cue that alerts them whenever the noise level is going to be higher, improving their overall performance on the task. A similar phenomenon is seen in humans, who can adjust better to noisier environments when they have some advance warning, Halassa says. However, mice with the ptchd1 mutation were unable to use these cues to improve their performance, even when their TRN deficit was treated with EBIO.</p> <p>This suggested that another brain circuit must be playing a role in the animals’ ability to filter out distracting noise. To test the possibility that this circuit is located in the prefrontal cortex, the researchers recorded from neurons in that region while mice lacking ptchd1 performed the task.
They found that neuronal activity died out much faster in the prefrontal cortex of these mice than in that of normal mice. That led the researchers to test another drug, known as modafinil, which is FDA-approved to treat narcolepsy and is sometimes prescribed to improve memory and attention.</p> <p>The researchers found that when they treated mice missing ptchd1 with both modafinil and EBIO, their hypersensitivity disappeared, and their performance on the task was the same as that of normal mice.</p> <p><strong>Targeting circuits</strong></p> <p>This successful reversal of symptoms suggests that the mice missing ptchd1 experience a combination of circuit deficits that each contribute differently to noise hypersensitivity. One circuit filters noise, while the other helps to control noise filtering based on external cues. Ptchd1 mutations affect both circuits, in different ways that can be treated with different drugs.</p> <p>Both of those circuits could also be affected by other genetic mutations that have been linked to autism and other neurological disorders, Halassa says. Targeting those circuits, rather than specific genetic mutations, may offer a more effective way to treat such disorders, he says.</p> <p>“These circuits are important for moving things around the brain — sensory information, cognitive information, working memory,” he says. “We’re trying to reverse-engineer circuit operations in the service of figuring out what to do about a real human disease.”</p> <p>He now plans to study circuit-level disturbances that arise in schizophrenia. That disorder affects circuits involving cognitive processes such as inference — the ability to draw conclusions from available information.</p> <p>The research was funded by the Simons Center for the Social Brain at MIT, the Stanley Center for Psychiatric Research at the Broad Institute, the McGovern Institute for Brain Research at MIT, the Pew Foundation, the Human Frontiers Science Program, the National Institutes of Health, the James and Patricia Poitras Center for Psychiatric Disorders Research at MIT, a Japan Society for the Promotion of Science Fellowship, and a National Alliance for the Research of Schizophrenia and Depression Young Investigator Award.</p> MIT neuroscientists have identified two brain circuits that help tune out distracting sensory information.Image: MIT NewsResearch, Autism, Brain and cognitive sciences, McGovern Institute, School of Science, Neuroscience, National Institutes of Health (NIH) Open access task force releases final recommendations http://news.mit.edu/2019/open-access-task-force-releases-final-recommendations-1017 Report urges MIT community to openly share the products of its research and teaching.
Thu, 17 Oct 2019 15:00:01 -0400 Brigham Fay | MIT Libraries http://news.mit.edu/2019/open-access-task-force-releases-final-recommendations-1017 <p>The <a href="https://open-access.mit.edu/" target="_blank">Ad Hoc Task Force on Open Access to MIT’s Research</a> has released its <a href="https://open-access.mit.edu/sites/default/files/OA-Final-Report.pdf" target="_blank">final recommendations</a>, which aim to support and increase the open sharing of MIT publications, data, software, and educational materials.</p> <p>The Institute-wide open access (OA) task force, convened by Provost Martin Schmidt in July 2017, was charged with exploring how MIT should update and revise its current OA policies to “further the Institute’s mission of disseminating the fruits of its research and scholarship as widely as possible.” A draft set of recommendations was released in March 2019 for public comment, and the community’s input was incorporated into the final recommendations.</p> <p>“In 2009, MIT made a bold statement when it passed one of the country’s first faculty open access policies and the first to be university-wide,” says MIT Libraries Director Chris Bourg, co-chair of the task force with Hal Abelson, Class of 1922 Professor of Electrical Engineering and Computer Science. “Ten years later, we remain convinced that openly sharing research and educational materials is key to the MIT mission of advancing knowledge and bringing that knowledge to bear on the world’s greatest challenges. Through the course of our work, the task force heard from MIT community members who are passionate about extending the reach of their work, and we feel our recommendations provide policies and infrastructure to support that.”</p> <p>The recommendations include ratifying an Institute-wide set of principles for open science and open scholarship, which affirm MIT’s larger commitment to the idea that scholarship and its dissemination should remain in the hands of researchers and their institutions. The MIT Libraries are working with the task force and the Committee on the Library System to develop a framework for negotiations with publishers based on these principles.</p> <p>Recommendations to broaden the MIT Faculty Open Access Policy to cover all MIT authors and to adopt an OA policy for monographs received widespread support across the Institute and in the broader community. The task force also calls for heads of departments, labs, and centers to develop discipline-specific plans to encourage and support open sharing. The libraries have already begun working with the departments of Linguistics and Philosophy and Brain and Cognitive Sciences to develop sample plans.</p> <p>“Scholarship serves humanity best when it is available to everyone,” says Abelson. “These recommendations reinforce MIT's leadership in open access to scholarship.”</p> <p>In an email to the MIT community, Provost Martin Schmidt announced that he would appoint an implementation team this fall to prioritize and enact the task force’s recommendations. He has asked Chris Bourg to convene and lead this team.</p> The Ad Hoc Task Force on Open Access to MIT’s Research has released its final recommendations, which aim to support and increase the open sharing of MIT publications, data, software, and educational materials.
Photo: Dominick ReuterLibraries, Open access, Linguistics and Philosophy, Brain and cognitive sciences, School of Engineering, Electrical engineering and computer science (EECS), Computer Science and Artificial Intelligence Laboratory (CSAIL), School of Science, School of Humanities Arts and Social Sciences, Digital humanities, Research, Community Controlling our internal world http://news.mit.edu/2019/controlling-our-internal-world-1016 Design principles from robotics help researchers decipher elements controlling mental processes in the brain. Wed, 16 Oct 2019 13:15:01 -0400 Sabbi Lall | McGovern Institute for Brain Research http://news.mit.edu/2019/controlling-our-internal-world-1016 <div> <p>Olympic skaters can launch, perform multiple aerial turns, and land gracefully, anticipating imperfections and reacting quickly to correct course. To make such elegant movements, the brain must have an internal model of the body to control, predict, and make almost-instantaneous adjustments to motor commands. So-called “internal models” are a fundamental concept in engineering and have long been suggested to underlie control of movement by the brain, but what about processes that occur in the absence of movement, such as contemplation, anticipation, and planning?</p> <p>Using a novel combination of task design, data analysis, and modeling, MIT neuroscientist <a href="https://mcgovern.mit.edu/profile/mehrdad-jazayeri/" rel="noopener noreferrer" target="_blank">Mehrdad Jazayeri</a> and colleagues now provide compelling evidence that the core elements of an internal model also control purely mental processes.</p> <p>“During my thesis, I realized that I’m interested not so much in how our senses react to sensory inputs, but instead in how my internal model of the world helps me make sense of those inputs,” says Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.</p> <p>Indeed, understanding the building blocks exerting control of such mental processes could help to paint a better picture of disruptions in mental disorders, such as <a href="https://mcgovern.mit.edu/research-areas/schizophrenia/" rel="noopener noreferrer" target="_blank">schizophrenia</a>.</p> <p><strong>Internal models for mental processes</strong></p> <p>Scientists working on the motor system have long theorized that the brain overcomes noisy and slow signals using an accurate internal model of the body. This internal model serves three critical functions: it provides motor commands to control movement, simulates upcoming movement to overcome delays, and uses feedback to make real-time adjustments.</p> <p>“The framework that we currently use to think about how the brain controls our actions is one that we have borrowed from robotics: We use controllers, simulators, and sensory measurements to control machines and train operators,” explains Reza Shadmehr, a professor at the Johns Hopkins School of Medicine who was not involved with the study. “That framework has largely influenced how we imagine our brain controlling our movements.”</p> <p>Jazayeri and colleagues wondered whether the same framework might explain the control principles governing mental states in the absence of any movement.</p> <p>“When we’re simply sitting, thoughts and images run through our heads and, fundamental to intellect, we can control them,” explains lead author Seth Egger, a former postdoc in the Jazayeri lab who is now at Duke University.
“We wanted to find out what’s happening between our ears when we are engaged in thinking.”</p> <p>Imagine, for example, a sign language interpreter keeping up with a fast speaker. To track speech accurately, the interpreter continuously anticipates where the speech is going, rapidly adjusting when the actual words deviate from the prediction. The interpreter could be using an internal model to anticipate upcoming words and using feedback to make adjustments on the fly.</p> <p><strong>1-2-3-Go</strong></p> <p>Hypothesizing about how the components of an internal model function in scenarios such as translation is one thing. Cleanly measuring and proving the existence of these elements is much more complicated, as the activity of the controller, simulator, and feedback are intertwined. To tackle this problem, Jazayeri and colleagues devised a clever task with primate models in which the controller, simulator, and feedback act at distinct times.</p> <p>In this task, called “1-2-3-Go,” the animal sees three consecutive flashes (1, 2, and 3) that form a regular beat, and learns to make an eye movement (Go) when it anticipates the fourth flash should occur. During the task, researchers measured neural activity in a region of the frontal cortex they had previously linked to the timing of movement.</p> <p>Jazayeri and colleagues had clear predictions about when the controller would act (between the third flash and “Go”) and when feedback would be engaged (with each flash of light). The key surprise came when researchers saw evidence for the simulator anticipating the third flash. This unexpected neural activity had dynamics resembling those of the controller, but it was not associated with a response. In other words, the researchers identified a covert plan that functions as the simulator, thus uncovering all three elements of an internal model for a mental process: the planning and anticipation of “Go” in the “1-2-3-Go” sequence.</p> <p>“Jazayeri’s work is important because it demonstrates how to study mental simulation in animals,” explains Shadmehr, “and where in the brain that simulation is taking place.”</p> <p>Having found how and where to measure an internal model in action, Jazayeri and colleagues now plan to ask whether these control strategies can explain how primates effortlessly generalize their knowledge from one behavioral context to another. For example, how does an interpreter rapidly adjust when someone with widely different speech habits takes the podium? This line of investigation promises to shed light on high-level mental capacities of the primate brain that simpler animals seem to lack, that go awry in mental disorders, and that designers of artificial intelligence systems so fondly seek.</p> </div> MIT neuroscientists have shown that the core elements of an internal model also control purely mental processes.McGovern Institute, Brain and cognitive sciences, School of Science, Research, Neuroscience, Mental health New method visualizes groups of neurons as they compute http://news.mit.edu/2019/flourescent-visualize-neuron-activity-1009 Fluorescent probe could allow scientists to watch circuits within the brain and link their activity to specific behaviors.
Wed, 09 Oct 2019 12:59:59 -0400 Anne Trafton | MIT News Office http://news.mit.edu/2019/flourescent-visualize-neuron-activity-1009 <p>Using a fluorescent probe that lights up when brain cells are electrically active, MIT and Boston University researchers have shown that they can image the activity of many neurons at once, in the brains of mice.</p> <p>This technique, which can be performed using a simple light microscope, could allow neuroscientists to visualize the activity of circuits within the brain and link them to specific behaviors, says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology and a professor of biological engineering and of brain and cognitive sciences at MIT.</p> <p>“If you want to study a behavior, or a disease, you need to image the activity of populations of neurons because they work together in a network,” says Boyden, who is also a member of MIT’s McGovern Institute for Brain Research, Media Lab, and Koch Institute for Integrative Cancer Research.</p> <p>Using this voltage-sensing molecule, the researchers showed that they could record electrical activity from many more neurons than has been possible with any existing, fully genetically encoded, fluorescent voltage probe.</p> <p>Boyden and Xue Han, an associate professor of biomedical engineering at Boston University, are the senior authors of <a href="https://www.nature.com/articles/s41586-019-1641-1" target="_blank">the study</a>, which appears in the Oct. 9 online edition of <em>Nature</em>. The lead authors of the paper are MIT postdoc Kiryl Piatkevich, BU graduate student Seth Bensussen, and BU research scientist Hua-an Tseng.</p> <p><strong>Seeing connections</strong></p> <p>Neurons compute using rapid electrical impulses, which underlie our thoughts, behavior, and perception of the world. Traditional methods for measuring this electrical activity require inserting an electrode into the brain, a process that is labor-intensive and usually allows researchers to record from only one neuron at a time. Multielectrode arrays allow the monitoring of electrical activity from many neurons at once, but they don’t sample densely enough to get all the neurons within a given volume.&nbsp; Calcium imaging does allow such dense sampling, but it measures calcium, an indirect and slow measure of neural electrical activity.</p> <p>In 2018, Boyden’s team developed an <a href="http://news.mit.edu/2018/seeing-brains-electrical-activity-0226">alternative way</a> to monitor electrical activity by labeling neurons with a fluorescent probe. Using a technique known as directed protein evolution, his group engineered a molecule called Archon1 that can be genetically inserted into neurons, where it becomes embedded in the cell membrane. When a neuron’s electrical activity increases, the molecule becomes brighter, and this fluorescence can be seen with a standard light microscope.</p> <p>In the 2018 paper, Boyden and his colleagues showed that they could use the molecule to image electrical activity in the brains of transparent worms and zebrafish embryos, and also in mouse brain slices. In the new study, they wanted to try to use it in living, awake mice as they engaged in a specific behavior.</p> <p>To do that, the researchers had to modify the probe so that it would go to a subregion of the neuron membrane. They found that when the molecule inserts itself throughout the entire cell membrane, the resulting images are blurry because the axons and dendrites that extend from neurons also fluoresce. 
To overcome that, the researchers attached a small peptide that guides the probe specifically to membranes of the cell bodies of neurons. They called this modified protein SomArchon.</p> <p>“With SomArchon, you can see each cell as a distinct sphere,” Boyden says. “Rather than having one cell’s light blurring all its neighbors, each cell can speak by itself loudly and clearly, uncontaminated by its neighbors.”</p> <p>The researchers used this probe to image activity in a part of the brain called the striatum, which is involved in planning movement, as mice ran on a ball. They were able to monitor activity in several neurons simultaneously and correlate each one’s activity with the mice’s movement. Some neurons’ activity went up when the mice were running, some went down, and others showed no significant change.</p> <p>“Over the years, my lab has tried many different versions of voltage sensors, and none of them have worked in living mammalian brains until this one,” Han says.</p> <p>Using this fluorescent probe, the researchers were able to obtain measurements similar to those recorded by an electrical probe, which can pick up activity on a very rapid timescale. This makes the measurements more informative than existing techniques such as imaging calcium, which neuroscientists often use as a proxy for electrical activity.</p> <p>“We want to record electrical activity on a millisecond timescale,” Han says. “The timescale and activity patterns that we get from calcium imaging are very different. We really don’t know exactly how these calcium changes are related to electrical dynamics.”</p> <p>With the new voltage sensor, it is also possible to measure very small fluctuations in activity that occur even when a neuron is not firing a spike. This could help neuroscientists study how small fluctuations impact a neuron’s overall behavior, which has previously been very difficult in living brains, Han says.</p> <p>The study “introduces a new and powerful genetic tool” for imaging voltage in the brains of awake mice, says Adam Cohen, a professor of chemistry, chemical biology, and physics at Harvard University.</p> <p>“Previously, researchers had to impale neurons with fine glass capillaries to make electrical recordings, and it was only possible to record from one or two cells at a time.&nbsp;The Boyden team recorded from about 10 cells at a time. That’s a lot of cells,” says Cohen, who was not involved in the research. “These tools open new possibilities to study the statistical structure of neural activity.&nbsp;But a mouse brain contains about 75 million neurons, so we still have a long way to go.”</p> <p><strong>Mapping circuits</strong></p> <p>The researchers also showed that this imaging technique can be combined with <a href="http://news.mit.edu/2010/brain-control-0107">optogenetics</a> — a technique developed by the Boyden lab and collaborators that allows researchers to turn neurons on and off with light by engineering them to express light-sensitive proteins. 
In this case, the researchers activated certain neurons with light and then measured the resulting electrical activity in these neurons.</p> <p>This imaging technology could also be combined with <a href="http://news.mit.edu/2019/mapping-brain-high-resolution-0117">expansion microscopy</a>, a technique that Boyden’s lab developed to expand brain tissue before imaging it, making it easier to see the anatomical connections between neurons in high resolution.</p> <p>“One of my dream experiments is to image all the activity in a brain, and then use expansion microscopy to find the wiring between those neurons,” Boyden says. “Then, can we predict how neural computations emerge from the wiring?”</p> <p>Such wiring diagrams could allow researchers to pinpoint circuit abnormalities that underlie brain disorders, and may also help researchers to design artificial intelligence that more closely mimics the human brain, Boyden says.</p> <p>The MIT portion of the research was funded by Edward and Kay Poitras, the National Institutes of Health, including a Director’s Pioneer Award, Charles Hieken, John Doerr, the National Science Foundation, the HHMI-Simons Faculty Scholars Program, the Human Frontier Science Program, and the U.S. Army Research Office.</p> In the top row, neurons are labeled with a fluorescent probe that reveals electrical activity. In the bottom row, neurons are labeled with a variant of the probe that accumulates specifically in the neuron cell bodies, preventing interference from axons of neighboring neurons. Image courtesy of the researchersResearch, Brain and cognitive sciences, Media Lab, Biological engineering, McGovern Institute, Koch Institute, School of Engineering, School of Science, School of Architecture and Planning, National Institutes of Health (NIH), National Science Foundation (NSF), Neuroscience Alzheimer’s plaque emerges early and deep in the brain http://news.mit.edu/2019/study-pinpoints-early-alzheimers-plaque-emergence-1008 Clumps of amyloid protein emerge early in deep regions, such as the mammillary body, and march outward in the brain along specific circuits. Tue, 08 Oct 2019 12:00:01 -0400 David Orenstein | Picower Institute http://news.mit.edu/2019/study-pinpoints-early-alzheimers-plaque-emergence-1008 <p>Long before symptoms like memory loss even emerge, the underlying pathology of Alzheimer’s disease, such as an accumulation of amyloid protein plaques, is well underway in the brain. A longtime goal of the field has been to understand where it starts so that future interventions could begin there. A new study by MIT neuroscientists at The Picower Institute for Learning and Memory could help those efforts by pinpointing the regions with the earliest emergence of amyloid in the brain of a prominent mouse model of the disease. Notably, the study also shows that the degree of amyloid accumulation in one of those same regions of the human brain correlates strongly with the progression of the disease.</p> <p>“Alzheimer’s is a neurodegenerative disease, so in the end you can see a lot of neuron loss,” says Wen-Chin “Brian” Huang, co-lead author of the study and a postdoc in the lab of co-senior author Li-Huei Tsai, Picower Professor of Neuroscience and director of the Picower Institute. “At that point, it would be hard to cure the symptoms. It’s really critical to understand what circuits and regions show neuronal dysfunction early in the disease.
This will, in turn, facilitate the development of effective therapeutics.”</p> <p>In addition to Huang, the study’s co-lead authors are Rebecca Canter, a former member of the Tsai lab, and Heejin Choi, a former member of the lab of co-senior author Kwanghun Chung, associate professor of chemical engineering and a member of the Picower Institute and the MIT Institute for Medical Engineering and Science.</p> <p><strong>Tracking plaques</strong></p> <p>Many research groups have made progress in recent years by tracing amyloid’s path in the brain using technologies such as positron emission tomography (PET), and by looking at brains post-mortem, but the new study in <em>Communications Biology</em> adds substantial new evidence from the 5XFAD mouse model because it presents an unbiased look at the entire brain as early as one month of age. The study reveals that amyloid begins its terrible march in deep brain regions such as the mammillary body, the lateral septum, and the subiculum before making its way along specific brain circuits that ultimately lead it to the hippocampus, a key region for memory, and the cortex, a key region for cognition.</p> <p>The team used SWITCH, a technology developed by Chung, to label amyloid plaques and to clarify the whole brains of 5XFAD mice so that they could be imaged in fine detail at different ages. The team was consistently able to see that plaques first emerged in the deep brain structures and then tracked along circuits, such as the Papez memory circuit, to spread throughout the brain by six to 12 months (a mouse’s lifespan is up to three years).</p> <p>The findings help to cement an understanding that has been harder to obtain from human brains, Huang says, because post-mortem dissection cannot easily account for how the disease developed over time and PET scans don’t offer the kind of resolution the new study provides from the mice.</p> <p><strong>Key validations</strong></p> <p>Importantly, the team directly validated a key prediction of their mouse findings in human tissue: If the mammillary body is indeed a very early place that amyloid plaques emerge, then the density of those plaques should increase in proportion with how far advanced the disease is. Sure enough, when the team used SWITCH to examine the mammillary bodies of post-mortem human brains at different stages of the disease, they saw exactly that relationship: The later the stage, the more densely plaque-packed the mammillary body was.</p> <p>“This suggests that human brain alterations in Alzheimer’s disease look similar to what we observe in mouse,” the authors wrote. “Thus we propose that amyloid-beta deposits start in susceptible subcortical structures and spread to increasingly complex memory and cognitive networks with age.”</p> <p>The team also performed experiments to determine whether the accumulation of plaques they observed was of real disease-related consequence for neurons in affected regions. One of the hallmarks of Alzheimer’s disease is a vicious cycle in which amyloid makes neurons too easily excited, and overexcitement causes neurons to produce more amyloid.
The team measured the excitability of neurons in the mammillary body of 5XFAD mice and found they were more excitable than those of otherwise similar mice that did not harbor the 5XFAD set of genetic alterations.</p> <p>In a preview of a potential future therapeutic strategy, when the researchers used a genetic approach to silence the neurons in the mammillary body of some 5XFAD mice but left neurons in others unaffected, the mice with silenced neurons produced less amyloid.</p> <p>While the study findings help explain much about how amyloid spreads in the brain over space and time, they also raise new questions, Huang says. How might the mammillary body affect memory, and what types of cells are most affected there?</p> <p>“This study sets a stage for further investigation of how dysfunction in these brain regions and circuits contributes to the symptoms of Alzheimer’s disease,” he says.</p> <p>In addition to Huang, Canter, Choi, Tsai, and Chung, the paper’s other authors are Jun Wang, Lauren Ashley Watson, Christine Yao, Fatema Abdurrob, Stephanie Bousleiman, Jennie Young, David Bennett and Ivana Delalle.</p> <p>The National Institutes of Health, the JPB Foundation, Norman B. Leventhal and Barbara Weedon fellowships, The Burroughs Wellcome Fund, the Searle Scholars Program, a Packard Award, a NARSAD Young Investigator Award, and the NCSOFT Cultural Foundation funded the research.</p> A white-stained cluster of amyloid plaque proteins, a hallmark of Alzheimer's disease pathology, is evident in the mammillary body of a 2-month-old Alzheimer's model mouse. A new study finds that plaques begin in such deep regions and spread throughout the brain along specific circuits.Image: Picower InstitutePicower Institute for Learning and Memory, School of Science, School of Engineering, Neuroscience, Alzheimer's, Institute for Medical Engineering and Science (IMES), Brain and cognitive sciences, Disease, Mental health, Research Study: Better sleep habits lead to better college grades http://news.mit.edu/2019/better-sleep-better-grades-1001 Data on MIT students underscore the importance of getting enough sleep; bedtime also matters. Tue, 01 Oct 2019 05:00:00 -0400 David L. Chandler | MIT News Office http://news.mit.edu/2019/better-sleep-better-grades-1001 <p>Two MIT professors have found a strong relationship between students’ grades and how much sleep they’re getting. What time students go to bed and the consistency of their sleep habits also make a big difference. And no, getting a good night’s sleep just before a big test is not good enough — it takes several nights in a row of good sleep to make a difference.</p> <p>Those are among the conclusions from an experiment in which 100 students in an MIT engineering class were given Fitbits, the popular wrist-worn devices that track a person’s activity 24/7, in exchange for the researchers’ access to a semester’s worth of their activity data. The findings — some unsurprising, but some quite unexpected — are reported today in the journal <em>Science of Learning</em> in a paper by MIT postdoc Kana Okano, professors Jeffrey Grossman and John Gabrieli, and two others.</p> <p>One of the surprises was that individuals who went to bed after some particular threshold time — for these students, that tended to be 2 a.m., but it varied from one person to another — tended to perform less well on their tests no matter how much total sleep they ended up getting.</p> <p>The study didn’t start out as research on sleep at all.
Instead, Grossman was trying to find a correlation between physical exercise and the academic performance of students in his class 3.091 (Introduction to Solid-State Chemistry). In addition to having 100 of the students wear Fitbits for the semester, he also enrolled about one-fourth of them in an intense fitness class in MIT’s Department of Athletics, Physical Education, and Recreation, with the help of assistant professors Carrie Moore and Matthew Breen, who created the class specifically for this study. The thinking was that there might be measurable differences in test performance between the two groups.</p> <p>There weren’t. Those who didn’t take the fitness classes performed just as well as those who did. “What we found at the end of the day was zero correlation with fitness, which I must say was disappointing since I believed, and still believe, there is a tremendous positive impact of exercise on cognitive performance,” Grossman says.</p> <p>He speculates that the intervals between the fitness program and the classes may have been too long to show an effect. But meanwhile, in the vast amount of data collected during the semester, some other correlations did become obvious. While the devices weren’t explicitly monitoring sleep, the Fitbit program’s proprietary algorithms did detect periods of sleep and changes in sleep quality, primarily based on lack of activity.</p> <p>These correlations were not at all subtle, Grossman says. There was essentially a straight-line relationship between the average amount of sleep a student got and their grades on the 11 quizzes, three midterms, and final exam, with the grades ranging from A’s to C’s. “There’s lots of scatter, it’s a noisy plot, but it’s a straight line,” he says. The fact that there was a correlation between sleep and performance wasn’t surprising, but the extent of it was, he says. Of course, this correlation can’t absolutely prove that sleep was the determining factor in the students’ performance, as opposed to some other influence that might have affected both sleep and grades. But the results are a strong indication, Grossman says, that sleep “really, really matters.”</p> <p>“Of course, we knew already that more sleep would be beneficial to classroom performance, from a number of previous studies that relied on subjective measures like self-report surveys,” Grossman says. “But in this study the benefits of sleep are correlated to performance in the context of a real-life college course, and driven by large amounts of objective data collection.”</p> <p>The study also revealed no improvement in scores for those who made sure to get a good night’s sleep right before a big test. According to the data, “the night before doesn’t matter,” Grossman says. “We've heard the phrase ‘Get a good night’s sleep, you've got a big day tomorrow.’ It turns out this does not correlate at all with test performance. Instead, it’s the sleep you get during the days when learning is happening that matters most.”</p> <p>Another surprising finding is that there appears to be a certain cutoff for bedtimes, such that going to bed later results in poorer performance, even if the total amount of sleep is the same. “When you go to bed matters,” Grossman says. “If you get a certain amount of sleep — let’s say seven hours — no matter when you get that sleep, as long as it’s before certain times, say you go to bed at 10, or at 12, or at 1, your performance is the same.
<p>Quality of sleep also mattered, not just quantity. For example, those who got relatively consistent amounts of sleep each night did better than those who had greater variations from one night to the next, even if they ended up with the same average amount.</p> <p>This research also helped to provide an explanation for something that Grossman says he had noticed and wondered about for years, which is that on average, the women in his class have consistently gotten better grades than the men. Now, he has a possible answer: The data show that the differences in quantity and quality of sleep can fully account for the differences in grades. “If we correct for sleep, men and women do the same in class. So sleep could be the explanation for the gender difference in our class,” he says.</p> <p>More research will be needed to understand the reasons why women tend to have better sleep habits than men. “There are so many factors out there that it could be,” Grossman says. “I can envision a lot of exciting follow-on studies to try to understand this result more deeply.”</p> <p>“The results of this study are very gratifying to me as a sleep researcher, but are terrifying to me as a parent,” says Robert Stickgold, a professor of psychiatry and director of the Center for Sleep and Cognition at Harvard Medical School, who was not connected with this study. He adds, “The overall course grades for students averaging six and a half hours of sleep were down 50 percent from other students who averaged just one hour more sleep. Similarly, those who had just a half-hour more night-to-night variation in their total sleep time had grades that dropped 45 percent below others with less variation. This is huge!”</p> <p>Stickgold says “a full quarter of the variation in grades was explained by these sleep parameters (including bedtime). All students need to not only be aware of these results, but to understand their implication for success in college. I can’t help but believe the same is true for high school students.” But he adds one caution: “That said, correlation is not the same as causation. While I have no doubt that less and more variable sleep will hurt a student’s grades, it’s also possible that doing poorly in classes leads to less and more variable sleep, not the other way around, or that some third factor, such as ADHD, could independently lead to poorer grades and poorer sleep.”</p> <p>The team also included technical assistant Jakub Kaczmarzyk and Harvard Business School researcher Neha Dave. The study was supported by MIT’s Department of Materials Science and Engineering, the Lubin Fund, and the MIT Integrated Learning Initiative.</p> Even relatively small differences in the duration, timing, and consistency of students' sleep may have significant effects on course test results, a new MIT study shows. Research, DMSE, Brain and cognitive sciences, Health, School of Engineering, School of Science, Mental health, McGovern Institute, Students, Student life, education, Education, teaching, academics Josh Tenenbaum receives 2019 MacArthur Fellowship http://news.mit.edu/2019/josh-tenenbaum-macarthur-fellowship-0925 Brain and cognitive sciences professor studies how the human mind is able to learn so rapidly.
Wed, 25 Sep 2019 07:00:00 -0400 Anne Trafton | MIT News Office http://news.mit.edu/2019/josh-tenenbaum-macarthur-fellowship-0925 <p>Josh Tenenbaum, a professor in MIT’s Department of Brain and Cognitive Sciences who studies human cognition, has been named a recipient of a 2019 MacArthur Fellowship.</p> <p>The fellowships, often referred to as “genius grants,” come with a five-year, $625,000 prize, which recipients are free to use as they see fit.</p> <p>“It’s an amazing honor, and very unexpected. There are a very small number of cognitive scientists who have ever received it, so it’s an incredible honor to be in their company,” says Tenenbaum, a professor of computational cognitive science and a member of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds and Machines (CBMM).</p> <p>Using computer modeling and behavioral experiments, Tenenbaum seeks to understand a key aspect of human intelligence: how people are able to rapidly learn new concepts and tasks based on very little information. This phenomenon is particularly noticeable in babies and young children, who can quickly learn meanings of new words, or how objects behave in the physical world, after minimal exposure to them.</p> <p>“One thing we’re trying to understand is how are these basic ways of understanding the world built, in very young children? What are babies born with? How do children really learn and how can we describe those ideas in engineering terms?” Tenenbaum says.</p> <p>Additionally, his lab explores how the mind performs cognitive processes such as making predictions about future events, inferring the mental states of other people, making judgments regarding cause and effect, and constructing theories about rules that govern physical interactions or social behavior.</p> <p>Tenenbaum says he would like to use the grant money to fund some of the more creative student projects in his lab, which are harder to get funding for, as well as collaborations with MIT colleagues that he sees as key partners in studying various aspects of cognition. He also hopes to use some of the funding to support his department’s efforts to increase research participation of under-represented minority students.</p> <p>Tenenbaum also studies machine learning and artificial intelligence, with the goal of bringing machine-learning algorithms closer to the capacities of human learning. This could lead to more powerful AI systems as well as more powerful theoretical paradigms for understanding human cognition.</p> <p>Tenenbaum received his PhD from MIT in 1999, and after a brief postdoc with the MIT AI Lab he joined the Stanford University faculty as an assistant professor of psychology. He returned to MIT as a faculty member in 2002. 
Last year, he was named a scientific director of The Core, a part of MIT’s Quest for Intelligence that focuses on advancing the science and engineering of both human and machine intelligence.</p> <p>Including Tenenbaum, 24 MIT faculty members and three staff members have won the MacArthur fellowship.</p> <p>MIT faculty who have won the award over the last decade include health care economist Amy Finkelstein and media studies scholar Lisa Parks (2018); computer scientist Regina Barzilay (2017); economist Heidi Williams (2015); computer scientist Dina Katabi and astrophysicist Sara Seager (2013); writer Junot Díaz (2012); physicist Nergis Mavalvala (2010); and economist Esther Duflo (2009).</p> Josh TenenbaumImage: Lilly Paquette, MITFaculty, Brain and cognitive sciences, Behavior, Memory, Computer Science and Artificial Intelligence Laboratory (CSAIL), Center for Brains Minds and Machines, School of Science, Awards, honors and fellowships Study finds hub linking movement and motivation in the brain http://news.mit.edu/2019/study-finds-hub-linking-movement-and-motivation-brain-0919 Detailed observations in the lateral septum indicate the region processes movement and reward information to help direct behavior. Thu, 19 Sep 2019 12:50:01 -0400 David Orenstein | Picower Institute for Learning and Memory http://news.mit.edu/2019/study-finds-hub-linking-movement-and-motivation-brain-0919 <p>Our everyday lives rely on planned movement through the environment to achieve goals. A new study by neuroscientists at MIT’s Picower Institute for Learning and Memory identifies a well-connected brain region as a crucial link between circuits guiding goal-directed movement and motivated behavior.</p> <p>Published Sept. 19 in <em>Current Biology</em>, the research shows that the lateral septum (LS), a region considered integral to modulating behavior and implicated in many psychiatric disorders, directly encodes information about the speed and acceleration of an animal as it navigates and learns how to obtain a reward in an environment.</p> <p>“Completing a simple task, such as acquiring food for dinner, requires the participation and coordination of a large number of regions of the brain, and the weighing of a number of factors: for example, how much effort is it to get food from the fridge versus a restaurant,” says Hannah Wirtshafter PhD '19, the study’s lead author. “We have discovered that the LS may be aiding you in making some of those decisions. That the LS represents place, movement, and motivational information may enable the LS to help you integrate or optimize performance across considerations of place, speed, and other environmental signals.”</p> <p>Previous research has attributed important behavioral functions to the LS, such as modulating anxiety, aggression, and affect. It is also believed to be involved in addiction, psychosis, depression, and anxiety. Neuroscientists have traced its connections to the hippocampus, a crucial center for encoding spatial memories and associating them with context, and to the ventral tegmental area (VTA), a region that mediates goal-directed behaviors via the neurotransmitter dopamine.
But until now, no one had shown that the LS directly tracks movement or communicates with the hippocampus, for instance by synchronizing to certain neural rhythms, about movement and the spatial context of reward.</p> <p>“The hippocampus is one of the most studied regions of the brain due to its involvement in memory, spatial navigation, and a large number of illnesses such as Alzheimer’s disease,” says Wirtshafter, who recently earned her PhD working on the research as a graduate student in the lab of senior author Matthew Wilson, Sherman Fairchild Professor of Neurobiology. “Comparatively little is known about the lateral septum, even though it receives a large amount of information from the hippocampus and is connected to multiple areas involved in motivation and movement.”</p> <p>Wilson says the study helps to illuminate the importance of the LS as a crossroads of movement and motivation information between regions such as the hippocampus and the VTA.</p> <p>“The discovery that activity in the LS is controlled by movement points to a link between movement and dopaminergic control through the LS that could be relevant to memory, cognition, and disease,” he says.</p> <p><strong>Tracking thoughts</strong></p> <p>Wirtshafter was able to directly observe the interactions between the LS and the hippocampus by simultaneously recording the electrical spiking activity of hundreds of neurons in each region in rats both as they sought a reward in a T-shaped maze, and as they became conditioned to associate light and sound cues with a reward in an open box environment.</p> <p>In that data, she and Wilson observed a speed and acceleration spiking code in the dorsal area of the LS, and saw clear signs that an overlapping population of neurons was processing information based on signals from the hippocampus, including spiking activity locked to hippocampal brain rhythms, location-dependent firing in the T-maze, and cue and reward responses during the conditioning task. Those observations suggested to the researchers that the septum may serve as a point of convergence of information about movement and spatial context.</p> <p>Wirtshafter’s measurements also showed that coordination of LS spiking with the hippocampal theta rhythm is selectively enhanced during choice behavior that relies on spatial working memory, suggesting that the LS may be a key relay of information about choice outcome during navigation (a simplified sketch of this kind of phase-locking measurement appears below).</p> <p><strong>Putting movement in context</strong></p> <p>Overall, the findings suggest that movement-related signaling in the LS, combined with the input that it receives from the hippocampus, may allow the LS to contribute to an animal’s awareness of its own position in space, as well as its ability to evaluate task-relevant changes in context arising from the animal’s movement, such as when it has reached a choice point, Wilson and Wirtshafter said.</p> <p>This also suggests that the reported ability of the LS to modulate affect and behavior may result from its ability to evaluate how internal states change during movement, and the consequences and outcomes of these changes. For instance, the LS may contribute to directing movement toward or away from the location of a positive or negative stimulus.</p>
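<p>The phase-locking measurement mentioned above can be made concrete with a short sketch. The code below is a simplified, hypothetical version using synthetic signals, not the study’s analysis pipeline: it band-passes a field potential in the theta band, extracts the instantaneous phase with a Hilbert transform, and quantifies how strongly spikes lock to that phase with the mean resultant vector length:</p>
<pre><code>
# A minimal sketch of spike-theta phase locking, using synthetic signals;
# an illustration of the general technique, not the study's pipeline.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(1)
fs = 1000.0                           # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)          # 10 seconds of recording

# Synthetic "LFP": an 8 Hz theta oscillation plus noise.
lfp = np.sin(2 * np.pi * 8 * t) + 0.5 * rng.standard_normal(t.size)

# Synthetic spike train biased toward one phase of the theta cycle.
spike_prob = 0.01 * (1 - np.sin(2 * np.pi * 8 * t))
spike_idx = np.flatnonzero(rng.random(t.size) < spike_prob)

# Band-pass the LFP in the theta band (6-10 Hz), then take the
# instantaneous phase of the analytic signal.
b, a = butter(3, [6 / (fs / 2), 10 / (fs / 2)], btype="band")
theta_phase = np.angle(hilbert(filtfilt(b, a, lfp)))

# Mean resultant vector length R of the phases at spike times:
# R near 0 means spikes are spread uniformly over theta phases;
# R near 1 means tight phase locking.
spike_phases = theta_phase[spike_idx]
R = np.abs(np.mean(np.exp(1j * spike_phases)))
print(f"{spike_idx.size} spikes, phase-locking strength R = {R:.2f}")
</code></pre>
<p>In an analysis of this general kind, the selective enhancement reported here would appear as a higher R during memory-guided choice periods than during other behavior.</p>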
<p>The new study therefore offers new perspectives on the role of the lateral septum in directed behavior, the researchers added, and given the known associations of the LS with some disorders, it may also offer new implications for broader understanding of the mechanisms relating mood, motivation, and movement, and the neuropsychiatric basis of mental illnesses.</p> <p>“Understanding how the LS functions in movement and motivation will aid us in understanding how the brain makes basic decisions, and how disruption in these processes might lead to different disorders,” Wirtshafter says.</p> <p>A National Defense Science and Engineering Graduate Fellowship and the JPB Foundation funded the research.</p> An MIT study is the first to show that a brain region called the lateral septum directly encodes movement information such as speed. Image: Hannah Wirtshafter/Picower Institute for Learning and MemoryPicower Institute, School of Science, Biology, Neuroscience, Behavior, Alumni/ae, Brain and cognitive sciences, Research