MIT News - Brain and cognitive sciences - Neuroscience MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community. How the brain encodes landmarks that help us navigate Neuroscientists discover how a key brain region combines visual and spatial information to help us find our way. Tue, 10 Mar 2020 00:00:00 -0400 Anne Trafton | MIT News Office <p>When we move through the streets of our neighborhood, we often use familiar landmarks to help us navigate. And as we think to ourselves, “OK, now make a left at the coffee shop,” a part of the brain called the retrosplenial cortex (RSC) lights up.</p> <p>While many studies have linked this brain region with landmark-based navigation, exactly how it helps us find our way is not well understood. A new study from MIT neuroscientists now reveals how neurons in the RSC use both visual and spatial information to encode specific landmarks.</p> <p>“There’s a synthesis of some of these signals — visual inputs and body motion — to represent concepts like landmarks,” says Mark Harnett, an assistant professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research. “What we went after in this study is the neuron-level and population-level representation of these different aspects of spatial navigation.”</p> <p>In a study of mice, the researchers found that this brain region creates a “landmark code” by combining visual information about the surrounding environment with spatial feedback about the mice’s own position along a track.
Integrating these two sources of information allowed the mice to learn where to find a reward, based on landmarks that they saw.</p> <p>“We believe that this code that we found, which is really locked to the landmarks, and also gives the animals a way to discriminate between landmarks, contributes to the animals’ ability to use those landmarks to find rewards,” says Lukas Fischer, an MIT postdoc and the lead author of the study.</p> <p>Harnett is the senior author of the study, which appears today in the journal <em>eLife</em>. Other authors are graduate student Raul Mojica Soto-Albors and recent MIT graduate Friederike Buck.</p> <p><strong>Encoding landmarks</strong></p> <p>Previous studies have found that people with damage to the RSC have trouble finding their way from one place to another, even though they can still recognize their surroundings. The RSC is also one of the first areas affected in Alzheimer’s patients, who often have trouble navigating.</p> <p>The RSC is wedged between the primary visual cortex and the motor cortex, and it receives input from both of those areas. It also appears to be involved in combining two types of representations of space — allocentric, meaning the relationship of objects to each other, and egocentric, meaning the relationship of objects to the viewer.</p> <p>“The evidence suggests that RSC is really a place where you have a fusion of these different frames of reference,” Harnett says. “Things look different when I move around in the room, but that’s because my vantage point has changed. They’re not changing with respect to one another.”</p> <p>In this study, the MIT team set out to analyze the behavior of individual RSC neurons in mice, including how they integrate multiple inputs that help with navigation. To do that, they created a virtual reality environment for the mice by allowing them to run on a treadmill while they watch a video screen that makes it appear they are running along a track. 
The speed of the video is determined by how fast the mice run.</p> <p>At specific points along the track, landmarks appear, signaling that there’s a reward available a certain distance beyond the landmark. The mice had to learn to distinguish between two different landmarks, and to learn how far beyond each one they had to run to get the reward.</p> <p>Once the mice learned the task, the researchers recorded neural activity in the RSC as the animals ran along the virtual track. They were able to record from a few hundred neurons at a time, and found that most of them anchored their activity to a specific aspect of the task.</p> <p>There were three primary anchoring points: the beginning of the trial, the landmark, and the reward point. The majority of the neurons were anchored to the landmarks, meaning that their activity would consistently peak at a specific point relative to the landmark, say 50 centimeters before it or 20 centimeters after it.</p> <p>Most of those neurons responded to both of the landmarks, but a small subset responded to only one or the other. The researchers hypothesize that those strongly selective neurons help the mice to distinguish between the landmarks and run the correct distance to get the reward.</p> <p>When the researchers used optogenetics (a tool that can turn off neuron activity) to block activity in the RSC, the mice’s performance on the task became much worse.</p> <p><strong>Combining inputs</strong></p> <p>The researchers also did an experiment in which the mice could choose to run or not while the video played at a constant speed, unrelated to the mice’s movement. The mice could still see the landmarks, but the location of the landmarks was no longer linked to a reward or to the animals’ own behavior. 
In that situation, RSC neurons did respond to the landmarks, but not as strongly as they did when the mice were using them for navigation.</p> <p>Further experiments allowed the researchers to tease out just how much neuron activation is produced by visual input (seeing the landmarks) and by feedback on the mouse’s own movement. However, simply adding those two numbers yielded totals much lower than the neuron activity seen when the mice were actively navigating the track.</p> <p>“We believe that is evidence for a mechanism of nonlinear integration of these inputs, where they get combined in a way that creates a larger response than what you would get if you just added up those two inputs in a linear fashion,” Fischer says.</p> <p>The researchers now plan to analyze data that they have already collected on how neuron activity evolves over time as the mice learn the task. They also hope to perform further experiments in which they could try to separately measure visual and spatial inputs into different locations within RSC neurons.</p> <p>The research was funded by the National Institutes of Health, the McGovern Institute, the NEC Corporation Fund for Research in Computers and Communications at MIT, and the Klingenstein-Simons Fellowship in Neuroscience.</p> MIT neuroscientists have identified a “landmark code” that helps the brain navigate our surroundings. Image: Christine Daniloff, MIT A new model of vision Computer model of face processing could reveal how the brain produces richly detailed visual representations so quickly. Wed, 04 Mar 2020 14:00:00 -0500 Anne Trafton | MIT News Office <p>When we open our eyes, we immediately see our surroundings in great detail.
How the brain is able to form these richly detailed representations of the world so quickly is one of the biggest unsolved puzzles in the study of vision.</p> <p>Scientists who study the brain have tried to replicate this phenomenon using computer models of vision, but so far, leading models only perform much simpler tasks such as picking out an object or a face against a cluttered background. Now, a team led by MIT cognitive scientists has produced a computer model that captures the human visual system’s ability to quickly generate a detailed scene description from an image, and offers some insight into how the brain achieves this.</p> <p>“What we were trying to do in this work is to explain how perception can be so much richer than just attaching semantic labels on parts of an image, and to explore the question of how do we see all of the physical world,” says Josh Tenenbaum, a professor of computational cognitive science and a member of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds, and Machines (CBMM).</p> <p>The new model posits that when the brain receives visual input, it quickly performs a series of computations that reverse the steps that a computer graphics program would use to generate a 2D representation of a face or other object. This type of model, known as efficient inverse graphics (EIG), also correlates well with electrical recordings from face-selective regions in the brains of nonhuman primates, suggesting that the primate visual system may be organized in much the same way as the computer model, the researchers say.</p> <p>Ilker Yildirim, a former MIT postdoc who is now an assistant professor of psychology at Yale University, is the lead author of the paper, which appears today in <em>Science Advances</em>. Tenenbaum and Winrich Freiwald, a professor of neurosciences and behavior at Rockefeller University, are the senior authors of the study. 
Mario Belledonne, a graduate student at Yale, is also an author.</p> <p><strong>Inverse graphics</strong></p> <p>Decades of research on the brain’s visual system has studied, in great detail, how light input onto the retina is transformed into cohesive scenes. This understanding has helped artificial intelligence researchers develop computer models that can replicate aspects of this system, such as recognizing faces or other objects.</p> <p>“Vision is the functional aspect of the brain that we understand the best, in humans and other animals,” Tenenbaum says. “And computer vision is one of the most successful areas of AI at this point. We take for granted that machines can now look at pictures and recognize faces very well, and detect other kinds of objects.”</p> <p>However, even these sophisticated artificial intelligence systems don’t come close to what the human visual system can do, Yildirim says.</p> <p>“Our brains don’t just detect that there’s an object over there, or recognize and put a label on something,” he says. “We see all of the shapes, the geometry, the surfaces, the textures. We see a very rich world.”</p> <p>More than a century ago, the physician, physicist, and philosopher Hermann von Helmholtz theorized that the brain creates these rich representations by reversing the process of image formation. He hypothesized that the visual system includes an image generator that would be used, for example, to produce the faces that we see during dreams. Running this generator in reverse would allow the brain to work backward from the image and infer what kind of face or other object would produce that image, the researchers say.</p> <p>However, the question remained: How could the brain perform this process, known as inverse graphics, so quickly? 
Computer scientists have tried to create algorithms that could perform this feat, but the best previous systems require many cycles of iterative processing, taking much longer than the 100 to 200 milliseconds the brain requires to create a detailed visual representation of what you’re seeing. Neuroscientists believe perception in the brain can proceed so quickly because it is implemented in a mostly feedforward pass through several hierarchically organized layers of neural processing.</p> <p>The MIT-led team set out to build a special kind of deep neural network model to show how a neural hierarchy can quickly infer the underlying features of a scene — in this case, a specific face. In contrast to the standard deep neural networks used in computer vision, which are trained from labeled data indicating the class of an object in the image, the researchers’ network is trained from a model that reflects the brain’s internal representations of what scenes with faces can look like.</p> <p>Their model thus learns to reverse the steps performed by a computer graphics program for generating faces. These graphics programs begin with a three-dimensional representation of an individual face and then convert it into a two-dimensional image, as seen from a particular viewpoint. These images can be placed on an arbitrary background image. The researchers theorize that the brain’s visual system may do something similar when you dream or conjure a mental image of someone’s face.</p> <p>The researchers trained their deep neural network to perform these steps in reverse — that is, it begins with the 2D image and then adds features such as texture, curvature, and lighting, to create what the researchers call a “2.5D” representation. These 2.5D images specify the shape and color of the face from a particular viewpoint. 
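In outline, the staged computation described here — a single feedforward pass from a 2D image through a viewpoint-dependent 2.5D stage to a viewpoint-invariant 3D code — can be sketched as follows. This is an illustrative sketch only: the stage functions are hypothetical placeholders, not the trained networks from the study.

```python
import numpy as np

# Sketch of the efficient inverse graphics (EIG) idea: one feedforward
# pass from a 2D image to a viewpoint-invariant 3D code via an
# intermediate "2.5D" stage. The stage functions are hypothetical
# placeholders, not the authors' trained networks.

def encode_2_5d(image):
    """2D image -> 2.5D maps: per-pixel surface properties (e.g.,
    depth and albedo) that are still tied to the viewpoint."""
    depth = image.mean(axis=-1)                  # stand-in for a learned estimate
    albedo = image / (depth[..., None] + 1e-6)   # stand-in for intrinsic color
    return {"depth": depth, "albedo": albedo}

def encode_3d(maps_2_5d):
    """2.5D maps -> compact 3D code (shape and texture coefficients)
    that no longer depends on the viewpoint."""
    shape_code = maps_2_5d["depth"].reshape(-1)[:64]
    texture_code = maps_2_5d["albedo"].reshape(-1)[:64]
    return np.concatenate([shape_code, texture_code])

def efficient_inverse_graphics(image):
    # 2D -> 2.5D -> 3D: the reverse of a graphics renderer's
    # 3D scene -> 2D image pipeline, with no iterative refinement.
    return encode_3d(encode_2_5d(image))

face_image = np.random.rand(32, 32, 3)   # stand-in for a face photograph
code = efficient_inverse_graphics(face_image)
print(code.shape)                        # (128,)
```

The point of the sketch is the architecture, not the arithmetic: each stage is a fixed function applied once, which is what lets the whole inference run in a single fast pass rather than through many cycles of iterative refinement.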
Those are then converted into 3D representations, which don’t depend on the viewpoint.</p> <p>“The model gives a systems-level account of the processing of faces in the brain, allowing it to see an image and ultimately arrive at a 3D object, which includes representations of shape and texture, through this important intermediate stage of a 2.5D image,” Yildirim says.</p> <p><strong>Model performance</strong></p> <p>The researchers found that their model is consistent with data obtained by studying certain regions in the brains of macaque monkeys. In a study published in 2010, Freiwald and Doris Tsao of Caltech recorded the activity of neurons in those regions and analyzed how they responded to 25 different faces, seen from seven different viewpoints. That study revealed three stages of higher-level face processing, which the MIT team now hypothesizes correspond to three stages of their inverse graphics model: roughly, a 2.5D viewpoint-dependent stage; a stage that bridges from 2.5 to 3D; and a 3D, viewpoint-invariant stage of face representation.</p> <p>“What we show is that both the quantitative and qualitative response properties of those three levels of the brain seem to fit remarkably well with the top three levels of the network that we’ve built,” Tenenbaum says.</p> <p>The researchers also compared the model’s performance to that of humans in a task that involves recognizing faces from different viewpoints. This task becomes harder when researchers alter the faces by removing the face’s texture while preserving its shape, or distorting the shape while preserving relative texture. 
The new model’s performance was much more similar to that of humans than computer models used in state-of-the-art face-recognition software, additional evidence that this model may be closer to mimicking what happens in the human visual system.</p> <p>“This work is exciting because it introduces interpretable stages of intermediate representation into a feedforward neural network model of face recognition,” says Nikolaus Kriegeskorte, a professor of psychology and neuroscience at Columbia University, who was not involved in the research. “Their approach merges the classical idea that vision inverts a model of how the image was generated, with modern deep feedforward networks. It’s very interesting that this model better explains neural representations and behavioral responses.”</p> <p>The researchers now plan to continue testing the modeling approach on additional images, including objects that aren’t faces, to investigate whether inverse graphics might also explain how the brain perceives other kinds of scenes. In addition, they believe that adapting this approach to computer vision could lead to better-performing AI systems.</p> <p>“If we can show evidence that these models might correspond to how the brain works, this work could lead computer vision researchers to take more seriously and invest more engineering resources in this inverse graphics approach to perception,” Tenenbaum says. 
“The brain is still the gold standard for any kind of machine that sees the world richly and quickly.”</p> <p>The research was funded by the Center for Brains, Minds, and Machines at MIT, the National Science Foundation, the National Eye Institute, the Office of Naval Research, the New York Stem Cell Foundation, the Toyota Research Institute, and Mitsubishi Electric.</p> MIT cognitive scientists have developed a computer model of face recognition that performs a series of computations that reverse the steps that a computer graphics program would use to generate a 2D representation of a face. Image: courtesy of the researchers Empowering faculty partnerships across the globe MISTI Global Seed Funds program has delivered $22 million to faculty since 2008. Tue, 03 Mar 2020 12:20:01 -0500 MISTI <p>MIT faculty share their creative and technical talent on campus as well as across the globe, compounding the Institute’s impact through strong international partnerships. Thanks to the MIT Global Seed Funds (GSF) program, managed by the MIT International Science and Technology Initiatives (<a href="" target="_blank">MISTI</a>), more of these faculty members will be able to build on these relationships to develop ideas and create new projects.</p> <p>“This MISTI fund was extremely helpful in consolidating our collaboration and has been the start of a long-term interaction between the two teams,” says 2017 GSF awardee Mehrdad Jazayeri, associate professor of brain and cognitive sciences and investigator at the McGovern Institute for Brain Research.
“We have already submitted multiple abstracts to conferences together, mapped out several ongoing projects, and secured international funding thanks to the preliminary progress this seed fund enabled.”</p> <p>This year, the 28 funds that comprise MISTI GSF received 232 MIT applications. Over $2.3 million was awarded to 107 projects from 23 departments across the entire Institute. This brings the amount awarded to $22 million over the 12-year life of the program. Besides supporting faculty, these funds also provide meaningful educational opportunities for students. The majority of GSF teams include students from MIT and international collaborators, bolstering both their research portfolios and global experience.</p> <p>“This project has had important impact on my grad student’s education and development. She was able to apply techniques she has learned to a new and challenging system, mentor an international student, participate in a major international meeting, and visit CEA,” says Professor of Chemistry Elizabeth Nolan, a 2017 GSF awardee.</p> <p>On top of these academic and research goals, students are actively broadening their cultural experience and scope. “The environment at CEA differs enormously from MIT because it is a national lab and because lab structure and graduate education in France is markedly different than at MIT,” Nolan continues. “At CEA, she had the opportunity to present research to distinguished international colleagues.”</p> <p>These impactful partnerships unite faculty teams behind common goals to tackle worldwide challenges, helping to develop solutions that would not be possible without international collaboration. 2017 GSF winner Emilio Bizzi, professor emeritus of brain and cognitive sciences and emeritus investigator at the McGovern Institute, articulated the advantage of combining these individual skills within a high-level team. 
“The collaboration among researchers was valuable in sharing knowledge, experience, skills and techniques … as well as offering the probability of future development of systems to aid in rehabilitation of patients suffering TBI.”</p> <p>The research opportunities that grow from these seed funds often lead to published papers and additional funding leveraged from early results. The next call for proposals will be in mid-May.</p> <p>MISTI creates applied international learning opportunities for MIT students that increase their ability to understand and address real-world problems. MISTI collaborates with partners at MIT and beyond, serving as a vital nexus of international activity and bolstering the Institute’s research mission by promoting collaborations between MIT faculty members and their counterparts abroad.</p> Left to right: The Machu Picchu Design Heritage project is a past Global Seed Fund recipient. Paloma Gonzalez, Takehiko Nagakura, Chang Liu, and Wenzhe Peng pose with a panoramic view of Machu Picchu in Peru. They are part of an MIT team that has worked to digitally document the site. Photo: MISTI The neural basis of sensory hypersensitivity A new study may explain why people with autism are often highly sensitive to light and noise. Mon, 02 Mar 2020 11:00:00 -0500 Anne Trafton | MIT News Office <p>Many people with autism spectrum disorders are highly sensitive to light, noise, and other sensory input. A new study in mice reveals a neural circuit that appears to underlie this hypersensitivity, offering a possible strategy for developing new treatments.</p> <p>MIT and Brown University neuroscientists found that mice lacking a protein called Shank3, which has been previously linked with autism, were more sensitive to a touch on their whiskers than genetically normal mice.
These Shank3-deficient mice also had overactive excitatory neurons in a region of the brain called the somatosensory cortex, which the researchers believe accounts for their over-reactivity.</p> <p>There are currently no treatments for sensory hypersensitivity, but the researchers believe that uncovering the cellular basis of this sensitivity may help scientists to develop potential treatments.</p> <p>“We hope our studies can point us to the right direction for the next generation of treatment development,” says Guoping Feng, the James W. and Patricia Poitras Professor of Neuroscience at MIT and a member of MIT’s McGovern Institute for Brain Research.</p> <p>Feng and Christopher Moore, a professor of neuroscience at Brown University, are the senior authors of the paper, which appears today in <em>Nature Neuroscience</em>. McGovern Institute research scientist Qian Chen and Brown postdoc Christopher Deister are the lead authors of the study.</p> <p><strong>Too much excitation</strong></p> <p>The Shank3 protein is important for the function of synapses — connections that allow neurons to communicate with each other. Feng has previously shown that mice lacking the Shank3 gene display many <a href="">traits associated with autism</a>, including avoidance of social interaction, and compulsive, repetitive behavior.</p> <p>In the new study, Feng and his colleagues set out to study whether these mice also show sensory hypersensitivity. For mice, one of the most important sources of sensory input is the whiskers, which help them to navigate and to maintain their balance, among other functions.</p> <p>The researchers developed a way to measure the mice’s sensitivity to slight deflections of their whiskers, and then trained the mutant Shank3 mice and normal (“wild-type”) mice to display behaviors that signaled when they felt a touch to their whiskers. 
They found that mice that were missing Shank3 accurately reported very slight deflections that were not noticed by the normal mice.</p> <p>“They are very sensitive to weak sensory input, which barely can be detected by wild-type mice,” Feng says. “That is a direct indication that they have sensory over-reactivity.”</p> <p>Once they had established that the mutant mice experienced sensory hypersensitivity, the researchers set out to analyze the underlying neural activity. To do that, they used an <a href="">imaging technique</a> that can measure calcium levels, which indicate neural activity, in specific cell types.</p> <p>They found that when the mice’s whiskers were touched, excitatory neurons in the somatosensory cortex were overactive. This was somewhat surprising because when Shank3 is missing, synaptic activity should drop. That led the researchers to hypothesize that the root of the problem was low levels of Shank3 in the inhibitory neurons that normally turn down the activity of excitatory neurons. Under that hypothesis, diminishing those inhibitory neurons’ activity would allow excitatory neurons to go unchecked, leading to sensory hypersensitivity.</p> <p>To test this idea, the researchers genetically engineered mice so that they could turn off Shank3 expression exclusively in inhibitory neurons of the somatosensory cortex. 
As they had suspected, they found that in these mice, excitatory neurons were overactive, even though those neurons had normal levels of Shank3.</p> <p>“If you only delete Shank3 in the inhibitory neurons in the somatosensory cortex, and the rest of the brain and the body is normal, you see a similar phenomenon where you have hyperactive excitatory neurons and increased sensory sensitivity in these mice,” Feng says.</p> <p><strong>Reversing hypersensitivity</strong></p> <p>The results suggest that reestablishing normal levels of neuron activity could reverse this kind of hypersensitivity, Feng says.</p> <p>“That gives us a cellular target for how in the future we could potentially modulate the inhibitory neuron activity level, which might be beneficial to correct this sensory abnormality,” he says.</p> <p>Many other studies in mice have linked defects in inhibitory neurons to neurological disorders, including Fragile X syndrome and Rett syndrome, as well as autism.</p> <p>“Our study is one of several that provide a direct and causative link between inhibitory defects and sensory abnormality, in this model at least,” Feng says. “It provides further evidence to support inhibitory neuron defects as one of the key mechanisms in models of autism spectrum disorders.”</p> <p>He now plans to study the timing of when these impairments arise during an animal’s development, which could help to guide the development of possible treatments. There are existing drugs that can turn down excitatory neurons, but these drugs have a sedative effect if used throughout the brain, so more targeted treatments could be a better option, Feng says.</p> <p>“We don’t have a clear target yet, but we have a clear cellular phenomenon to help guide us,” he says. “We are still far away from developing a treatment, but we’re happy that we have identified defects that point in which direction we should go.”</p> <p>The research was funded by the Hock E. Tan and K. 
Lisa Yang Center for Autism Research at MIT, the Stanley Center for Psychiatric Research at the Broad Institute of MIT and Harvard, the Nancy Lurie Marks Family Foundation, the Poitras Center for Psychiatric Disorders Research at the McGovern Institute, the Varanasi Family, R. Buxton, and the National Institutes of Health.</p> MIT neuroscientists have discovered a brain circuit that appears to contribute to the sensory hypersensitivity often seen in people with autism spectrum disorders. Image: Jose-Luis Olivares, MIT Demystifying the world of deep networks Researchers discover that no magic is required to explain why deep networks generalize despite going against statistical intuition. Fri, 28 Feb 2020 14:40:01 -0500 Kris Brewer | Center for Brains, Minds and Machines <p>Introductory statistics courses teach us that, when fitting a model to some data, we should have more data than free parameters to avoid the danger of overfitting — fitting noisy data too closely, and thereby failing to fit new data. It is surprising, then, that in modern deep learning the practice is to have orders of magnitude more parameters than data. Despite this, deep networks show good predictive performance, and in fact do better the more parameters they have. Why would that be?</p> <p>It has been known for some time that good performance in machine learning comes from controlling the complexity of networks, which is not just a simple function of the number of free parameters. The complexity of a classifier, such as a neural network, depends on measuring the “size” of the space of functions that this network represents, with multiple technical measures previously suggested: Vapnik–Chervonenkis dimension, covering numbers, or Rademacher complexity, to name a few.
Complexity, as measured by these notions, can be controlled during the learning process by imposing a constraint on the norm of the parameters — in short, on how “big” they can get. The surprising fact is that no such explicit constraint seems to be needed in training deep networks. Does deep learning lie outside of classical learning theory? Do we need to rethink the foundations?</p> <p>In a new <em>Nature Communications</em> paper, “Complexity Control by Gradient Descent in Deep Networks,” a team from the Center for Brains, Minds, and Machines led by Director Tomaso Poggio, the Eugene McDermott Professor in the MIT Department of Brain and Cognitive Sciences, has shed some light on this puzzle by addressing the most practical and successful applications of modern deep learning: classification problems.</p> <p>“For classification problems, we observe that in fact the parameters of the model do not seem to converge, but rather grow in size indefinitely during gradient descent. However, in classification problems only the normalized parameters matter — i.e., the direction they define, not their size,” says co-author and MIT PhD candidate Qianli Liao. “The not-so-obvious thing we showed is that the commonly used gradient descent on the unnormalized parameters induces the desired complexity control on the normalized ones.”</p> <p>“We have known for some time in the case of regression for shallow linear networks, such as kernel machines, that iterations of gradient descent provide an implicit, vanishing regularization effect,” Poggio says. “In fact, in this simple case we provably know that we get the best-behaving maximum-margin, minimum-norm solution. The question we asked ourselves, then, was: Can something similar happen for deep networks?”</p> <p>The researchers found that it does. As co-author and MIT postdoc Andrzej Banburski explains, “Understanding convergence in deep networks shows that there are clear directions for improving our algorithms.
In fact, we have already seen hints that controlling the rate at which these unnormalized parameters diverge allows us to find better performing solutions and find them faster.”</p> <p>What does this mean for machine learning? There is no magic behind deep networks. The same theory behind all linear models is at play here as well. This work suggests ways to improve deep networks, making them more accurate and faster to train.</p> MIT researchers (left to right) Qianli Liao, Tomaso Poggio, and Andrzej Banburski stand with their equations. Image: Kris Brewer Bringing artificial intelligence into the classroom, research lab, and beyond Through the Undergraduate Research Opportunities Program, students work to build AI tools with impact. Thu, 13 Feb 2020 16:50:01 -0500 Kim Martineau | MIT Quest for Intelligence <p>Artificial intelligence is reshaping how we live, learn, and work, and this past fall, MIT undergraduates got to explore and build on some of the tools coming out of research labs at MIT. Through the <a href="" target="_blank">Undergraduate Research Opportunities Program</a> (UROP), students worked with researchers at the MIT Quest for Intelligence and elsewhere on projects to improve AI literacy and K-12 education, understand face recognition and how the brain forms new memories, and speed up tedious tasks like cataloging new library material. Six projects are featured below.</p> <p><strong>Programming Jibo to forge an emotional bond with kids</strong></p> <p>Nicole Thumma met her first robot when she was 5, at a museum. “It was incredible that I could have a conversation, even a simple conversation, with this machine,” she says.
“It made me think&nbsp;robots&nbsp;are&nbsp;the most complicated manmade thing, which made me want to learn more about them.”</p> <p>Now a senior at MIT, Thumma spent last fall writing dialogue for the social robot Jibo, the brainchild of&nbsp;<a href="">MIT Media Lab</a> Associate Professor&nbsp;<a href="">Cynthia Breazeal</a>. In a UROP project co-advised by Breazeal and researcher&nbsp;<a href="">Hae Won Park</a>, Thumma scripted mood-appropriate dialogue to help Jibo bond with students while playing learning exercises together.</p> <p>Because emotions are complicated, Thumma riffed on a set of basic feelings in her dialogue — happy/sad, energized/tired, curious/bored. If Jibo was feeling sad, but energetic and curious, she might program it to say, “I'm feeling blue today, but something that always cheers me up is talking with my friends, so I'm glad I'm playing with you.” A tired, sad, and bored Jibo might say, with a tilt of its head, “I don't feel very good. It's like my wires are all mixed up today. I think this activity will help me feel better.”&nbsp;</p> <p>In these brief interactions, Jibo models its vulnerable side and teaches kids how to express their emotions. At the end of an interaction, kids can give Jibo a virtual token to pick up its mood or energy level. “They can see what impact they have on others,” says Thumma. In all, she wrote 80 lines of dialogue, an experience that led her to stay on at MIT for an MEng in robotics. The Jibos she helped build are now in kindergarten classrooms in Georgia, offering emotional and intellectual support as they read stories and play word games with their human companions.</p> <p><strong>Understanding why familiar faces stand out</strong></p> <p>With a quick glance, the faces of friends and acquaintances jump out from those of strangers. 
How does the brain do it?&nbsp;<a href="">Nancy Kanwisher</a>’s lab in the&nbsp;<a href="">Department of Brain and Cognitive Sciences</a> (BCS) is building computational models to understand the face-recognition process.&nbsp;<a href="">Two key findings</a>: the brain starts to register the gender and age of a face before recognizing its identity, and face perception is more robust for familiar faces.</p> <p>This fall, second-year student Joanne Yuan worked with postdoc&nbsp;<a href="">Katharina Dobs</a>&nbsp;to understand&nbsp;why this is so.&nbsp;In earlier experiments, subjects were shown multiple photographs of familiar faces of American celebrities and unfamiliar faces of German celebrities while their brain activity was measured with magnetoencephalography. Dobs found that subjects processed age and gender before the celebrities’ identity regardless of whether the face was familiar. But they were much better at unpacking the gender and identity of faces they knew, such as Scarlett Johansson’s. Dobs suggests that the improved gender and identity recognition for familiar faces is due to a feed-forward mechanism rather than top-down retrieval of information from memory.&nbsp;</p> <p>Yuan has explored both hypotheses with a type of model, convolutional neural networks (CNNs), now widely used in face-recognition tools. She trained a CNN on the face images and studied its layers to understand its processing steps. She found that the model, like Dobs’ human subjects, appeared to process gender and age before identity, suggesting that both CNNs and the brain are primed for face recognition in similar ways. In another experiment, Yuan trained two CNNs on familiar and unfamiliar faces and found that the CNNs, again like humans, were better at identifying the familiar faces.</p> <p>Yuan says she enjoyed exploring two fields — machine learning and neuroscience — while gaining an appreciation for the simple act of recognizing faces. 
“It’s pretty complicated and there’s so much more to learn,” she says.</p> <p><strong>Exploring memory formation</strong></p> <p>Protruding from the branching dendrites of brain cells are microscopic nubs that grow and change shape as memories form. Improved imaging techniques have allowed researchers to move closer to these nubs, or spines, deep in the brain to learn more about their role in creating and consolidating memories.</p> <p><a href="">Susumu Tonegawa</a>, the Picower Professor of Biology and Neuroscience, has&nbsp;pioneered a technique for labeling clusters of brain cells, called “engram cells,” that are linked to specific memories in mice. Through conditioning, researchers train a mouse, for example, to recognize an environment. By tracking the evolution of dendritic spines in cells linked to a single memory trace, before and after the learning episode, researchers can estimate where memories may be physically stored.&nbsp;</p> <p>But it takes time. Hand-labeling spines in a stack of 100 images can take hours — more, if the researcher needs to consult images from previous days to verify that a spine-like nub really is one, says&nbsp;Timothy O’Connor, a software engineer in BCS helping with the project.&nbsp;With 400 images taken in a typical session, annotating the images can take longer than collecting them, he adds.</p> <p>O’Connor&nbsp;contacted the Quest&nbsp;<a href="">Bridge</a>&nbsp;to see if the process could be automated. Last fall, undergraduates Julian Viera and Peter Hart began work with Bridge AI engineer Katherine Gallagher to train a neural network to automatically pick out the spines. Because spines vary widely in shape and size, teaching the computer what to look for is one big challenge facing the team as the work continues. If successful, the tool could be useful to a hundred other labs across the country.</p> <p>“It’s exciting to work on a project that could have a huge amount of impact,” says Viera. 
“It’s also cool to be learning something new in computer science and neuroscience.”</p> <p><strong>Speeding up the archival process</strong></p> <p>Each year, Distinctive Collections at the MIT Libraries receives&nbsp;a large volume of personal letters, lecture notes, and other materials from donors inside and outside of MIT&nbsp;that tell MIT’s story and document the history of science and technology.&nbsp;Each of these unique items must be organized and described, with a typical box of material taking up to 20 hours to process and make available to users.</p> <p>To make the work go faster, Andrei Dumitrescu and Efua Akonor, undergraduates at MIT and Wellesley College respectively, are working with Quest Bridge’s Katherine Gallagher to develop an automated system for processing archival material donated to MIT. Their goal: to&nbsp;develop a machine-learning pipeline that can categorize and extract information from scanned images of the records. To accomplish this task, they turned to the U.S. Library of Congress (LOC), which has digitized much of its extensive holdings.&nbsp;</p> <p>This past fall, the students pulled images of about&nbsp;70,000 documents, including correspondence, speeches, lecture notes, photographs, and books&nbsp;housed at the LOC, and trained a classifier to distinguish a letter from, say, a speech. They are now using optical character recognition and a text-analysis tool&nbsp;to extract key details like&nbsp;the date, author, and recipient of a letter, or the date and topic of a lecture. They will soon incorporate object recognition to describe the content of a&nbsp;photograph,&nbsp;and are looking forward to&nbsp;testing&nbsp;their system on the MIT Libraries’ own digitized data.</p> <p>One&nbsp;highlight of the project was learning to use Google Cloud. “This is the real world, where there are no directions,” says Dumitrescu. 
“It was fun to figure things out for ourselves.”&nbsp;</p> <p><strong>Inspiring the next generation of robot engineers</strong></p> <p>From smartphones to smart speakers, a growing number of devices live in the background of our daily lives, hoovering up data. What we lose in privacy we gain in time-saving personalized recommendations and services. It’s one of AI’s defining tradeoffs that kids should understand, says third-year student Pablo&nbsp;Alejo-Aguirre.&nbsp;“AI brings us&nbsp;beautiful and&nbsp;elegant solutions, but it also has its limitations and biases,” he says.</p> <p>Last year, Alejo-Aguirre worked on an AI literacy project co-advised by Cynthia Breazeal and graduate student&nbsp;<a href="">Randi Williams</a>. In collaboration with the nonprofit&nbsp;<a href="">i2 Learning</a>, Breazeal’s lab has developed an AI curriculum around a robot named Gizmo that teaches kids how to&nbsp;<a href="">train their own robot</a>&nbsp;with an Arduino micro-controller and a user interface based on Scratch-X, a drag-and-drop programming language for children.&nbsp;</p> <p>To make Gizmo accessible for third-graders, Alejo-Aguirre developed specialized programming blocks that give the robot simple commands like, “turn left for one second,” or “move forward for one second.” He added Bluetooth to control Gizmo remotely and simplified its assembly, replacing screws with acrylic plates that slide and click into place. He also gave kids the choice of rabbit and frog-themed Gizmo faces.&nbsp;“The new design is a lot sleeker and cleaner, and the edges are more kid-friendly,” he says.&nbsp;</p> <p>After building and testing several prototypes, Alejo-Aguirre and Williams demoed their creation last summer at a robotics camp. 
This past fall, Alejo-Aguirre manufactured 100 robots that are now in two schools in Boston and a third in western Massachusetts.&nbsp;“I’m proud of the technical breakthroughs I made through designing, programming, and building the robot, but I’m equally proud of the knowledge that will be shared through this curriculum,” he says.</p> <p><strong>Predicting stock prices with machine learning</strong></p> <p>In search of a practical machine-learning application to learn more about the field, sophomores Dolapo Adedokun and Daniel Adebi hit on stock picking. “We all know buy, sell, or hold,” says Adedokun. “We wanted to find an easy challenge that anyone could relate to, and develop a guide for how to use machine learning in that context.”</p> <p>The two friends approached the Quest Bridge with their own idea for a UROP project after they were turned away by several labs because of their limited programming experience, says Adedokun. Bridge engineer Katherine Gallagher, however, was willing to take on novices. “We’re building machine-learning tools for non-AI specialists,” she says. “I was curious to see how Daniel and Dolapo would approach the problem and reason through the questions they encountered.”</p> <p>Adebi wanted to learn more about reinforcement learning, the trial-and-error AI technique that has allowed computers to surpass humans at chess, Go, and a growing list of video games. So, he and Adedokun worked with Gallagher to structure an experiment to see how reinforcement learning would fare against another AI technique, supervised learning, in predicting stock prices.</p> <p>In reinforcement learning, an agent is turned loose in an unstructured environment with one objective: to maximize a specific outcome (in this case, profits) without being told explicitly how to do so. 
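The reinforcement setup just described can be sketched with a toy tabular Q-learning agent. Everything below — the two-bit state encoding, the synthetic price series, and the hyperparameters — is an illustrative assumption, not the students' actual model:

```python
import numpy as np

# Toy Q-learning sketch: an agent picks buy/sell/hold actions to maximize
# profit on a synthetic price series, learning only from the reward signal.
rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0.1, 1.0, 2000))  # random-walk prices with drift

BUY, SELL, HOLD = 0, 1, 2
q = np.zeros((2, 2, 3))             # (holding a share?, price just rose?) -> action values
alpha, gamma, eps = 0.1, 0.95, 0.1  # learning rate, discount factor, exploration rate

holding = 0
for t in range(1, len(prices) - 1):
    rose = int(prices[t] > prices[t - 1])
    state = (holding, rose)
    # epsilon-greedy action selection
    a = rng.integers(3) if rng.random() < eps else int(np.argmax(q[state]))
    if a == BUY:
        holding = 1
    elif a == SELL:
        holding = 0
    # reward: next-step change in portfolio value (nonzero only while holding)
    reward = holding * (prices[t + 1] - prices[t])
    next_state = (holding, int(prices[t + 1] > prices[t]))
    q[state][a] += alpha * (reward + gamma * q[next_state].max() - q[state][a])
```

With upward-drifting prices like these, the learned action values typically come to favor buying and holding — the agent discovers profitable behavior without ever being told which action is "correct."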
Supervised learning, by contrast, uses labeled data to accomplish a goal, much like a problem set with the correct answers included.</p> <p>Adedokun and Adebi trained both models on seven years of stock-price data, from 2010-17, for Amazon, Microsoft, and Google. They then compared profits generated by the reinforcement learning model and a trading algorithm based on the supervised model’s price predictions for the following 18 months; they found that their reinforcement learning model produced higher returns.</p> <p>They developed a Jupyter notebook to share what they learned and explain how they built and tested their models. “It was a valuable exercise for all of us,” says Gallagher. “Daniel and Dolapo got hands-on experience with machine-learning fundamentals, and I got insight into the types of obstacles users with their background might face when trying to use the tools we’re building at the Bridge.”</p> Students participating in MIT Quest for Intelligence-funded UROP projects include: (clockwise from top left) Nicole Thumma, Joanne Yuan, Julian Viera, Andrei Dumitrescu, Pablo Alejo-Aguirre, and Dolapo Adedokun. Photo panel: Samantha Smiley. Quest for Intelligence, Brain and cognitive sciences, Media Lab, Libraries, School of Engineering, School of Science, Artificial intelligence, Algorithms, Computer science and technology, Machine learning, Undergraduate Research Opportunities Program (UROP), Students, Undergraduate, Electrical engineering and computer science (EECS) Bridging the gap between human and machine vision Researchers develop a more robust machine-vision architecture by studying how human vision responds to changing viewpoints of objects. Tue, 11 Feb 2020 16:40:01 -0500 Kris Brewer | Center for Brains, Minds and Machines <p>Suppose you look briefly from a few feet away at a person you have never met before. Step back a few paces and look again. Will you be able to recognize her face? “Yes, of course,” you probably are thinking. 
If this is true, it would mean that our visual system, having seen a single image of an object such as a specific face, recognizes it robustly despite changes to the object’s position and scale, for example. On the other hand, we know that state-of-the-art classifiers, such as vanilla deep networks, will fail this simple test.</p> <p>In order to recognize a specific face under a range of transformations, neural networks need to be trained with many examples of the face under the different conditions. In other words, they can achieve invariance through memorization, but cannot do it if only one image is available. Thus, understanding how human vision can pull off this remarkable feat is relevant for engineers aiming to improve their existing classifiers. It also is important for neuroscientists modeling the primate visual system with deep networks. In particular, it is possible that the invariance with one-shot learning exhibited by biological vision requires a rather different computational strategy than that of deep networks.&nbsp;</p> <p>A new paper by MIT PhD candidate in electrical engineering and computer science Yena Han and colleagues in <em>Nature Scientific Reports</em> entitled “Scale and translation-invariance for novel objects in human vision” discusses how they study this phenomenon more carefully to create novel biologically inspired networks.</p> <p>"Humans can learn from very few examples, unlike deep networks. This is a huge difference with vast implications for engineering of vision systems and for understanding how human vision really works," states co-author Tomaso Poggio — director of the Center for Brains, Minds and Machines (CBMM) and the Eugene McDermott Professor of Brain and Cognitive Sciences at MIT. "A key reason for this difference is the relative invariance of the primate visual system to scale, shift, and other transformations. 
Strangely, this has been mostly neglected in the AI community, in part because the psychophysical data so far were less than clear-cut. Han's work has now established solid measurements of basic invariances of human vision.”</p> <p>To distinguish invariance arising from intrinsic computation from invariance gained through experience and memorization, the new study measured the range of invariance in one-shot learning. A one-shot learning task was performed by presenting Korean letter stimuli to human subjects who were unfamiliar with the language. These letters were initially presented a single time under one specific condition and tested at different scales or positions than the original condition. The first experimental result is that — just as you guessed — humans showed significant scale-invariant recognition after only a single exposure to these novel objects. The second result is that the range of position-invariance is limited, depending on the size and placement of objects.</p> <p>Next, Han and her colleagues performed a comparable experiment in deep neural networks designed to reproduce this human performance. The results suggest that to explain invariant recognition of objects by humans, neural network models should explicitly incorporate built-in scale-invariance. In addition, limited position-invariance of human vision is better replicated in the network by having the model neurons’ receptive fields increase as they are further from the center of the visual field. This architecture is different from commonly used neural network models, where an image is processed under uniform resolution with the same shared filters.</p> <p>“Our work provides a new understanding of the brain representation of objects under different viewpoints. 
It also has implications for AI, as the results provide new insights into what is a good architectural design for deep neural networks,” remarks Han, CBMM researcher and lead author of the study.</p> <p>Han and Poggio were joined by Gemma Roig and Gad Geiger in the work.</p> Yena Han (left) and Tomaso Poggio stand with an example of the visual stimuli used in a new psychophysics study. Photo: Kris Brewer. Center for Brains Minds and Machines, Brain and cognitive sciences, Machine learning, Artificial intelligence, Computer vision, Research, School of Science, Computer science and technology, Electrical engineering and computer science (EECS), School of Engineering Brainstorming energy-saving hacks on Satori, MIT’s new supercomputer Three-day hackathon explores methods for making artificial intelligence faster and more sustainable. Tue, 11 Feb 2020 11:50:01 -0500 Kim Martineau | MIT Quest for Intelligence <p>Mohammad Haft-Javaherian planned to spend an hour at the&nbsp;<a href="">Green AI Hackathon</a>&nbsp;— just long enough to get acquainted with MIT’s new supercomputer,&nbsp;<a href="">Satori</a>. Three days later, he walked away with $1,000 for his winning strategy to shrink the carbon footprint of artificial intelligence models trained to detect heart disease.&nbsp;</p> <p>“I never thought about the kilowatt-hours I was using,” he says. “But this hackathon gave me a chance to look at my carbon footprint and find ways to trade a small amount of model accuracy for big energy savings.”&nbsp;</p> <p>Haft-Javaherian was among six teams to earn prizes at a hackathon co-sponsored by the&nbsp;<a href="">MIT Research Computing Project</a>&nbsp;and&nbsp;<a href="">MIT-IBM Watson AI Lab</a> Jan. 28-30. 
The event was meant to familiarize students with Satori, the computing cluster IBM&nbsp;<a href="">donated</a> to MIT last year, and to inspire new techniques for building energy-efficient AI models that put less planet-warming carbon dioxide into the air.&nbsp;</p> <p>The event was also a celebration of Satori’s green-computing credentials. With an architecture designed to minimize the transfer of data, among other energy-saving features, Satori recently earned&nbsp;<a href="">fourth place</a>&nbsp;on the Green500 list of supercomputers. Its location gives it additional credibility: It sits on a remediated brownfield site in Holyoke, Massachusetts, now the&nbsp;<a href="">Massachusetts Green High Performance Computing Center</a>, which runs largely on low-carbon hydro, wind and nuclear power.</p> <p>A postdoc at MIT and Harvard Medical School, Haft-Javaherian came to the hackathon to learn more about Satori. He stayed for the challenge of trying to cut the energy intensity of his own work, focused on developing AI methods to screen the coronary arteries for disease. A new imaging method, optical coherence tomography, has given cardiologists a new tool for visualizing defects in the artery walls that can slow the flow of oxygenated blood to the heart. But even the experts can miss subtle patterns that computers excel at detecting.</p> <p>At the hackathon, Haft-Javaherian ran a test on his model and saw that he could cut its energy use eight-fold by reducing the time Satori’s graphics processors sat idle. He also experimented with adjusting the model’s number of layers and features, trading varying degrees of accuracy for lower energy use.&nbsp;</p> <p>A second team, Alex Andonian and Camilo Fosco, also won $1,000 by showing they could train a classification model nearly 10 times faster by optimizing their code and losing a small bit of accuracy. 
Graduate students in the Department of Electrical Engineering and Computer Science (EECS), Andonian and Fosco are currently training a classifier to tell legitimate videos from AI-manipulated fakes, to compete in Facebook’s&nbsp;<a href="">Deepfake Detection Challenge</a>. Facebook launched the contest last fall to crowdsource ideas for stopping the spread of misinformation on its platform ahead of the 2020 presidential election.</p> <p>If a technical solution to deepfakes is found, it will need to run on millions of machines at once, says Andonian. That makes energy efficiency key. “Every optimization we can find to train and run more efficient models will make a huge difference,” he says.</p> <p>To speed up the training process, they tried streamlining their code and lowering the resolution of their 100,000-video training set by eliminating some frames. They didn’t expect a solution in three days, but Satori’s size worked in their favor. “We were able to run 10 to 20 experiments at a time, which let us iterate on potential ideas and get results quickly,” says Andonian.&nbsp;</p> <p>As AI continues to improve at tasks like reading medical scans and interpreting video, models have grown bigger and more calculation-intensive, and thus, energy intensive. By one&nbsp;<a href="">estimate</a>, training a large language-processing model produces nearly as much carbon dioxide as the cradle-to-grave emissions from five American cars. The footprint of the typical model is modest by comparison, but as AI applications proliferate its environmental impact is growing.&nbsp;</p> <p>One way to green AI, and tame the exponential growth in demand for training AI, is to build smaller models. That’s the approach that a third hackathon competitor, EECS graduate student Jonathan Frankle, took. 
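One common way to shrink a trained network is magnitude pruning: zero out all but the largest-magnitude weights and keep only the surviving "subnetwork." The sketch below is a generic illustration of that idea on a stand-in weight matrix, not Frankle's specific method:

```python
import numpy as np

# Illustrative magnitude pruning: keep only the largest-magnitude 10 percent
# of a layer's weights (i.e., 90 percent fewer connections) and zero the rest.
rng = np.random.default_rng(1)
w = rng.normal(size=(256, 256))   # stand-in for a trained layer's weight matrix

keep_fraction = 0.10
threshold = np.quantile(np.abs(w), 1 - keep_fraction)
mask = np.abs(w) >= threshold     # boolean mask marking the surviving subnetwork
w_pruned = w * mask               # pruned layer: ~90 percent of entries are zero

sparsity = 1 - mask.mean()        # fraction of weights removed, about 0.90
```

In lottery-ticket-style experiments the interesting question is not the pruning itself but whether the surviving connections, reset and retrained, match the full network's accuracy.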
Frankle is looking for signals early in the training process that point to subnetworks within the larger, fully-trained network that can do the same job.&nbsp;The idea builds on his award-winning&nbsp;<a href="">Lottery Ticket Hypothesis</a>&nbsp;paper from last year that found a neural network could perform with 90 percent fewer connections if the right subnetwork was found early in training.</p> <p>The hackathon competitors were judged by John Cohn, chief scientist at the MIT-IBM Watson AI Lab, Christopher Hill, director of MIT’s Research Computing Project, and Lauren Milechin, a research software engineer at MIT.&nbsp;</p> <p>The judges recognized four&nbsp;other teams: Department of Earth, Atmospheric and Planetary Sciences (EAPS) graduate students Ali Ramadhan,&nbsp;Suyash Bire, and James Schloss,&nbsp;for adapting the programming language Julia for Satori; MIT Lincoln Laboratory postdoc Andrew Kirby, for adapting code he wrote as a graduate student to Satori using a library designed for easy programming of computing architectures; and Department of Brain and Cognitive Sciences graduate students Jenelle Feather and Kelsey Allen, for applying a technique that drastically simplifies models by cutting their number of parameters.</p> <p>IBM developers were on hand to answer questions and gather feedback.&nbsp;&nbsp;“We pushed the system — in a good way,” says Cohn. “In the end, we improved the machine, the documentation, and the tools around it.”&nbsp;</p> <p>Going forward, Satori will be joined in Holyoke by&nbsp;<a href="">TX-Gaia</a>, Lincoln Laboratory’s new supercomputer.&nbsp;Together, they will provide feedback on the energy use of their workloads. “We want to raise awareness and encourage users to find innovative ways to green-up all of their computing,” says Hill.&nbsp;</p> Several dozen students participated in the Green AI Hackathon, co-sponsored by the MIT Research Computing Project and MIT-IBM Watson AI Lab. 
Photo panel: Samantha Smiley. Quest for Intelligence, MIT-IBM Watson AI Lab, Electrical engineering and computer science (EECS), EAPS, Lincoln Laboratory, Brain and cognitive sciences, School of Engineering, School of Science, Algorithms, Artificial intelligence, Computer science and technology, Data, Machine learning, Software, Climate change, Awards, honors and fellowships, Hackathon, Special events and guest speakers A college for the computing age With the initial organizational structure in place, the MIT Schwarzman College of Computing moves forward with implementation. Tue, 04 Feb 2020 12:30:01 -0500 Terri Park | MIT Schwarzman College of Computing <p>The mission of the MIT Stephen A. Schwarzman College of Computing is to address the opportunities and challenges of the computing age — from hardware to software to algorithms to artificial intelligence (AI) — by transforming the capabilities of academia in three key areas: supporting the rapid evolution and growth of computer science and AI; facilitating collaborations between computing and other disciplines; and focusing on social and ethical responsibilities of computing through combining technological approaches and insights from social science and humanities, and through engagement beyond academia.</p> <p>Since starting his position in August 2019, Daniel Huttenlocher, the inaugural dean of the MIT Schwarzman College of Computing, has been working with many stakeholders in designing the initial organizational structure of the college. 
Beginning with the <a href="" target="_blank">College of Computing Task Force Working Group reports</a> and feedback from the MIT community, the structure has been developed through an iterative process of draft plans, yielding a <a href="" target="_blank">26-page document</a> outlining the initial academic organization of the college. The structure is designed to facilitate the college mission through improved coordination and evolution of existing computing programs at MIT, improved collaboration in computing across disciplines, and the development of new cross-cutting activities and programs, notably in the social and ethical responsibilities of computing.</p> <p>“The MIT Schwarzman College of Computing is both bringing together existing MIT programs in computing and developing much-needed new cross-cutting educational and research programs,” says Huttenlocher. “For existing programs, the college helps facilitate coordination and manage the growth in areas such as computer science, artificial intelligence, data systems and society, and operations research, as well as helping strengthen interdisciplinary computing programs such as computational science and engineering. 
For new areas, the college is creating cross-cutting platforms for the study and practice of social and ethical responsibilities of computing, for multi-departmental computing education, and for incubating new interdisciplinary computing activities.”</p> <p>The following existing departments, institutes, labs, and centers are now part of the college:</p> <ul> <li>Department of Electrical Engineering and Computer Science (EECS), which has been <a href="" target="_self">reorganized</a> into three overlapping sub-units of electrical engineering (EE), computer science (CS), and artificial intelligence and decision-making (AI+D), and is jointly part of the MIT Schwarzman College of Computing and School of Engineering;</li> <li>Operations Research Center (ORC), which is jointly part of the MIT Schwarzman College of Computing and MIT Sloan School of Management;</li> <li>Institute for Data, Systems, and Society (IDSS), which will be increasing its focus on the societal aspects of its mission while also continuing to support statistics across MIT, and including the Technology and Policy Program (TPP) and Sociotechnical Systems Research Center (SSRC);</li> <li>Center for Computational Science and Engineering (CCSE), which is being renamed from the Center for Computational Engineering and broadening its focus in the sciences;</li> <li>Computer Science and Artificial Intelligence Laboratory (CSAIL);</li> <li>Laboratory for Information and Decision Systems (LIDS); and</li> <li>Quest for Intelligence.</li> </ul> <p>With the initial structure in place, Huttenlocher, the college leadership team, and the leaders of the academic units that are part of the college, in collaboration with departments in all five schools, are actively moving forward with curricular and programmatic development, including the launch of two new areas, the Common Ground for Computing Education and the Social and Ethical Responsibilities of Computing (SERC). 
Still in the early planning stages, these programs are the aspects of the college that are designed to cut across lines and involve a number of departments throughout MIT. Other programs are expected to be introduced as the college continues to take shape.</p> <p>“The college is an Institute-wide entity, working with and across all five schools,” says Anantha Chandrakasan, dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science, who was part of the task force steering committee. “Its continued growth and focus depend greatly on the input of our MIT community, a process which began over a year ago. I’m delighted that Dean Huttenlocher and the college leadership team have engaged the community for collaboration and discussion around the plans for the college.”</p> <p>With these organizational changes, students, faculty, and staff in these units are members of the college, and in some cases, jointly with a school, as will be those who are engaged in the new cross-cutting activities in SERC and Common Ground. “A question we get frequently,” says Huttenlocher, “is how to apply to the college. As is the case throughout MIT, undergraduate admissions are handled centrally, and graduate admissions are handled by each individual department or graduate program.”<strong> </strong></p> <p><strong>Advancing computing</strong></p> <p>Despite the unprecedented growth in computing, there remains substantial unmet demand for expertise. In academia, colleges and universities worldwide are faced with oversubscribed programs in computer science and the constant need to keep up with rapidly changing materials at both the graduate and undergraduate level.</p> <p>According to Huttenlocher, the computing fields are evolving at a pace today that is beyond the capabilities of current academic structures to handle. 
“As academics, we pride ourselves on being generators of new knowledge, but academic institutions themselves don’t change that quickly. The rise of AI is probably the biggest recent example of that, along with the fact that about 40 percent of MIT undergraduates are majoring in computer science, where we have 7 percent of the MIT faculty.”</p> <p>In order to help meet this demand, MIT is increasing its academic capacity in computing and AI with 50 new faculty positions — 25 will be core computing positions in CS, AI, and related areas, and 25 will be shared jointly with departments. Searches are now active to recruit core faculty in CS and AI+D, and for joint faculty with MIT Philosophy, the Department of Brain and Cognitive Sciences, and several interdisciplinary institutes.</p> <p>The new shared faculty searches will largely be conducted around the concept of “clusters” to build capacity at MIT in important computing areas that cut across disciplines, departments, and schools. Huttenlocher, the provost, and the five school deans will work to identify themes based on input from departments so that recruiting can be undertaken during the next academic year.</p> <p><strong>Cross-cutting collaborations in computing</strong></p> <p>Building on the history of strong faculty participation in interdepartmental labs, centers, and initiatives, the MIT Schwarzman College of Computing provides several forms of membership in the college based on cross-cutting research, teaching, or external engagement activities. 
While computing is affecting intellectual inquiry in almost every discipline, Huttenlocher is quick to stress that “it’s bi-directional.” He notes that existing collaborations across various schools and departments, such as MIT Digital Humanities, as well as opportunities for new such collaborations, are key to the college mission because in the same way that “computing is changing thinking in the disciplines; the disciplines are changing the way people do computing.”</p> <p>Under the leadership of Asu Ozdaglar, the deputy dean of academics and department head of EECS, the college is developing the Common Ground for Computing Education, an interdepartmental teaching collaborative that will facilitate the offering of computing classes and coordination of computing-related curricula across academic units.</p> <p>The objectives of this collaborative are to provide opportunities for faculty across departments to work together, including co-teaching classes, creating new undergraduate majors or minors such as in AI+D, as well as facilitating undergraduate blended degrees such as 6-14 (Computer Science, Economics, and Data Science), 6-9 (Computation and Cognition), 11-6 (Urban Science and Planning with Computer Science), 18-C (Mathematics with Computer Science), and others.</p> <p>“It is exciting to bring together different areas of computing with methodological and substantive commonalities as well as differences around one table,” says Ozdaglar. “MIT faculty want to collaborate in topics around computing, but they are increasingly overwhelmed with teaching assignments and other obligations. I think the college will enable the types of interactions that are needed to foster new ideas.”</p> <p>Thinking about the impact on the student experience, Ozdaglar expects that the college will help students better navigate the computing landscape at MIT by creating clearer paths. 
She also notes that many students have passions beyond computer science, but realize the need to be adept in computing techniques and methodologies in order to pursue other interests, whether it be political science, economics, or urban science. “The idea for the college is to educate students who are fluent in computation, but at the same time, creatively apply computing with the methods and questions of the domain they are mostly interested in.”</p> <p>For Deputy Dean of Research Daniela Rus, who is also the director of CSAIL and the Andrew and Erna Viterbi Professor in EECS, developing research programs “that bring together MIT faculty and students from different units to advance computing and to make the world better through computing” is a top priority. She points to the recent launch of the <a href="" target="_self">MIT Air Force AI Innovation Accelerator</a>, a collaboration between the MIT Schwarzman College of Computing and the U.S. Air Force focused on AI, as an example of the types of research projects the college can facilitate.</p> <p>“As humanity works to solve problems ranging from climate change to curing disease, removing inequality, ensuring sustainability, and eliminating poverty, computing opens the door to powerful new solutions,” says Rus. “And with the MIT Schwarzman College as our foundation, I believe MIT will be at the forefront of those solutions. 
Our scholars are laying theoretical foundations of computing and applying those foundations to big ideas in computing and across disciplines.”</p> <p><strong>Habits of mind and action</strong></p> <p>A critically important cross-cutting area is the Social and Ethical Responsibilities of Computing, which will facilitate the development of responsible “habits of mind and action” for those who create and deploy computing technologies, and the creation of technologies in the public interest.</p> <p>“The launch of the MIT Schwarzman College of Computing offers an extraordinary new opportunity for the MIT community to respond to today’s most consequential questions in ways that serve the common good,” says Melissa Nobles, professor of political science, the Kenan Sahin Dean of the MIT School of Humanities, Arts, and Social Sciences, and co-chair of the Task Force Working Group on Social Implications and Responsibilities of Computing.</p> <p>“As AI and other advanced technologies become ubiquitous in their influence and impact, touching nearly every aspect of life, we have increasingly seen the need to more consciously align powerful new technologies with core human values — integrating consideration of societal and ethical implications of new technologies into the earliest stages of their development. Asking, for example, of every new technology and tool: Who will benefit? What are the potential ecological and social costs? Will the new technology amplify or diminish human accomplishments in the realms of justice, democracy, and personal privacy?</p> <p>“As we shape the college, we are envisioning an MIT culture in which all of us are equipped and encouraged to think about such implications. In that endeavor, MIT’s humanistic disciplines will serve as deep resources for research, insight, and discernment. 
We also see an opportunity for advanced technologies to help solve political, economic, and social issues that trouble today’s world by integrating technology with a humanistic analysis of complex civilizational issues — among them climate change, the future of work, and poverty, issues that will yield only to collaborative problem-solving. It is not too much to say that human survival may rest on our ability to solve these problems via collective intelligence, designing approaches that call on the whole range of human knowledge.”</p> <p>Julie Shah, an associate professor in the Department of Aeronautics and Astronautics and head of the Interactive Robotics Group at CSAIL, who co-chaired the working group with Nobles and is now a member of the college leadership, adds that “traditional technologists aren’t trained to pause and envision the possible futures of how technology can and will be used. This means that we need to develop new ways of training our students and ourselves in forming new habits of mind and action so that we include these possible futures into our design.”</p> <p>The associate deans of Social and Ethical Responsibilities of Computing, Shah and David Kaiser, the Germeshausen Professor of the History of Science and professor of physics, are designing a systemic framework for SERC that will not only effect change in computing education and research at MIT, but one that will also inform policy and practice in government and industry. 
Activities that are currently in development include multi-disciplinary curricula embedded in traditional computing and AI courses across all levels of instruction, the commission and curation of a series of case studies that will be modular and available to all via MIT’s open access channels, active learning projects, cross-disciplinary monthly convenings, public forums, and more.&nbsp;</p> <p>“A lot of how we’ve been thinking about SERC components is building capacity with what we already have at the Institute as a very important first step. And that means how do we get people interacting in ways that can be a little bit different than what has been familiar, because I think there are a lot of shared goals among the MIT community, but the gears aren’t quite meshing yet. We want to further support collaborations that might cut across lines that otherwise might not have had much traffic between them,” notes Kaiser.</p> <p><strong>Just the beginning</strong></p> <p>While he’s excited by the progress made so far, Huttenlocher points out there will continue to be revisions made to the organizational structure of the college. “We are at the very beginning of the college, with a tremendous amount of excellence at MIT to build on, and with some clear needs and opportunities, but the landscape is changing rapidly and the college is very much a work in progress.”</p> <p>The college has other initiatives in the planning stages, such as the Center for Advanced Studies of Computing that will host fellows from inside and outside of MIT on semester- or year-long project-oriented programs in focused topic areas that could seed new research, scholarly, educational, or policy work. 
In addition, Huttenlocher is planning to launch a search for an assistant or associate dean of equity and inclusion, once the Institute Community and Equity Officer is in place, to focus on improving and creating programs and activities that will help broaden participation in computing classes and degree programs, increase the diversity of top faculty candidates in computing fields, and ensure that faculty search and graduate admissions processes have diverse slates of candidates and interviews.</p> <p>“The typical academic approach would be to wait until it’s clear what to do, but that would be a mistake. The way we’re going to learn is by trying and by being more flexible. That may be a more general attribute of the new era we’re living in,” he says. “We don’t know what it’s going to look like years from now, but it’s going to be pretty different, and MIT is going to be shaping it.”</p> <p>The MIT Schwarzman College of Computing will be hosting a community forum on Wednesday, Feb. 12 at 2 p.m. in Room 10-250. Members from the MIT community are welcome to attend to learn more about the initial organizational structure of the college.</p> MIT Schwarzman College of Computing leadership team (left to right) David Kaiser, Daniela Rus, Dan Huttenlocher, Julie Shah, and Asu Ozdaglar Photo: Sarah Bastille | MIT Schwarzman College of Computing, School of Engineering, Computer Science and Artificial Intelligence Laboratory (CSAIL), Laboratory for Information and Decision Systems (LIDS), Quest for Intelligence, Philosophy, Brain and cognitive sciences, Digital humanities, School of Humanities Arts and Social Sciences, Artificial intelligence, Operations research, Aeronautical and astronautical engineering, Electrical Engineering & Computer Science (eecs), IDSS, Ethics, Administration, Classes and programs Genetic screen offers new drug targets for Huntington’s disease Neuroscientists identify genes that modulate the disease’s toxic effects. 
Thu, 30 Jan 2020 10:59:59 -0500 Anne Trafton | MIT News Office <p>Using a type of genetic screen that had previously been impossible in the mammalian brain, MIT neuroscientists have identified hundreds of genes that are necessary for neuron survival. They also used the same approach to identify genes that protect against the toxic effects of a mutant protein that causes Huntington’s disease.</p> <p>These efforts yielded at least one promising drug target for Huntington’s: a family of genes that may normally help cells to break down the mutated huntingtin protein before it can aggregate and form the clumps seen in the brains of Huntington’s patients.</p> <p>“These genes had never been linked to Huntington’s disease processes before. When we saw them, that was very exciting because we found not only one gene, but actually several of the same family, and also we saw them have an effect across two models of Huntington’s disease,” says Myriam Heiman, an associate professor of neuroscience in the Department of Brain and Cognitive Sciences and the senior author of <a href="" target="_blank">the study</a>.</p> <p>The researchers’ new screening technique, which allowed them to assess all of the roughly 22,000 genes found in the mouse brain, could also be applied to other neurological disorders, including Alzheimer’s and Parkinson’s diseases, says Heiman, who is also a member of MIT’s Picower Institute for Learning and Memory and the Broad Institute of MIT and Harvard.</p> <p>Broad Institute postdoc Mary Wertz is the lead author of the paper, which appears today in <em>Neuron</em>.</p> <p><strong>Genome-wide screen</strong></p> <p>For many decades, biologists have been performing screens in which they systematically knock out individual genes in model organisms such as mice, fruit flies, and the worm <em>C. elegans</em>, then observe the effects on cell survival. However, such screens have never been done in the mouse brain. 
One major reason for this is that delivering the molecular machinery required for these genetic manipulations is more difficult in the brain than elsewhere in the body.</p> <p>“These unbiased genetic screens are very powerful, but the technical difficulty of doing it in the central nervous system at a genome-wide scale has never been overcome,” Heiman says.</p> <p>In recent years, researchers at the Broad Institute have developed libraries of genetic material that can be used to turn off the expression of every gene found in the mouse genome. One of these libraries is based on short hairpin RNA (shRNA), which interferes with the messenger RNA that carries a particular gene’s information. Another makes use of CRISPR, a technique that can disrupt or delete specific genes in a cell. These libraries are delivered by viruses, each of which carries one element that targets a single gene.</p> <p>The libraries were designed so that each of the approximately 22,000 mouse genes is targeted by four or five shRNAs or CRISPR components, so 80,000 to 100,000 viruses need to make it into the brain to ensure that all genes are hit at least once. The MIT team came up with a way to make their solution of viruses highly concentrated, and to inject it directly into the striatum of the brain. Using this approach, they were able to deliver one of the shRNA or CRISPR elements to about 25 percent of all of the cells in the striatum.</p> <p>The researchers focused on the striatum, which is involved in regulating motor control, cognition, and emotion, because it is the brain region most affected by Huntington’s disease. It is also involved in Parkinson’s disease, as well as autism and drug addiction.</p> <p>About seven months after the injection, the researchers sequenced all of the genomic DNA in the targeted striatal neurons. Their approach is based on the idea that if particular genes are necessary for neurons’ survival, any cell with those genes knocked out will die. 
Then, those shRNAs or CRISPR elements will be found at lower rates in the total population of cells.</p> <p>The study turned up many genes that are necessary for any cell to survive, such as enzymes involved in cell metabolism or copying DNA into RNA. The findings also revealed genes that had been identified in previous studies of fruit flies and worms as being important for neuron function, such as genes involved in the function of synapses (structures that allow neurons to communicate with each other).</p> <p>However, a novel finding of this study was the identification of genes that hadn’t been linked to neuron survival before, Heiman says. Many of those were genes that code for metabolic proteins that are essential in cells that burn a lot of energy.</p> <p>“What we interpret this to mean is that neurons in the mammalian brain are much more metabolically active and have a much higher dependency on these processes than, for example, a neuron in <em>C. elegans</em>,” Heiman says.</p> <p>William Yang, a professor of psychiatry and biobehavioral sciences at the University of California at Los Angeles, calls the new screening technique “a giant leap forward” for the field of brain research.</p> <p>“Prior to this, people really could study the molecular function of genes gene-by-gene, or maybe a few genes at a time. This is a groundbreaking study because it demonstrates that you can perform genome-wide genetic screening in the mammalian central nervous system,” says Yang, who was not involved in the study.</p> <p><strong>Promising targets</strong></p> <p>The researchers then performed the same type of screen on two different mouse models of Huntington’s disease. These mouse models express the mutated form of the huntingtin protein, which forms clumps in the brains of Huntington’s patients. In this case, the researchers compared the screen results from the Huntington’s mice to those from normal mice. 
If any of the shRNA or CRISPR elements were found less frequently in the Huntington’s mice, that would suggest that those elements targeted genes that are helping to make cells more resistant to the toxic effects of the huntingtin protein, Heiman says.</p> <p>One promising drug target that emerged from this screen is the Nme gene family, which has previously been linked to cancer metastasis, but not Huntington’s disease. The MIT team found that one of these genes, Nme1, regulates the expression of other genes that are involved in the proper disposal of proteins. The researchers hypothesize that without Nme1, these genes don’t get turned on as highly, allowing huntingtin to accumulate in the brain. They also showed that when Nme1 is overexpressed in the mouse models of Huntington’s, the Huntington’s symptoms appear to improve.</p> <p>Although this gene hasn’t been linked to Huntington’s before, there have already been some efforts to develop compounds that target it, for use in treating cancer, Heiman says.</p> <p>“This is very exciting to us because it’s theoretically a druggable compound,” she says. “If we can increase its activity with a small molecule, perhaps we can replicate the effect of genetic overexpression.”</p> <p>The research was funded by the National Institutes of Health/National Institute of Neurological Disorders and Stroke, the JPB Foundation, the Bev Hartig Huntington’s Disease Foundation, a Fay/Frank Seed Award from the Brain Research Foundation, the Jeptha H. and Emily V. 
Wade Award, and the Hereditary Disease Foundation.</p> A genome-wide analysis has revealed genes that are essential for neuron survival, as well as genes that protect against the effects of Huntington’s disease. Image courtesy of NIH, edited by MIT News | Research, Brain and cognitive sciences, Picower Institute, Broad Institute, School of Science, National Institutes of Health (NIH), Genetics, CRISPR, Neuroscience, Disease, Drug development With these neurons, extinguishing fear is its own reward The same neurons responsible for encoding reward also form new memories to suppress fearful ones. Tue, 21 Jan 2020 12:40:01 -0500 David Orenstein | Picower Institute <p>When you expect a really bad experience to happen and then it doesn’t, it’s a distinctly positive feeling. A new study of fear extinction training in mice may suggest why: The findings not only identify the exact population of brain cells that are key for learning not to feel afraid anymore, but also show that these neurons are the same ones that help encode feelings of reward.</p> <p>The study, published Jan. 14 in <em>Neuron</em> by scientists at MIT’s Picower Institute for Learning and Memory, specifically shows that fear extinction memories and feelings of reward alike are stored by neurons that express the gene Ppp1r1b in the posterior of the basolateral amygdala (pBLA), a region known to assign associations of aversive or rewarding feelings, or “valence,” with memories. The study was conducted by Xiangyu Zhang, an MIT graduate student; Joshua Kim, a former graduate student; and Susumu Tonegawa, professor of biology and neuroscience at the RIKEN-MIT Laboratory of Neural Circuit Genetics at the Picower Institute for Learning and Memory at MIT and the Howard Hughes Medical Institute.</p> <p>“We constantly live at the balance of positive and negative emotion,” Tonegawa says. “We need to have very strong memories of dangerous circumstances in order to avoid similar circumstances to recur. 
But if we are constantly feeling threatened we can become depressed. You need a way to bring your emotional state back to something more positive.”</p> <p><strong>Overriding fear with reward</strong></p> <p>In a prior study, Kim showed that Ppp1r1b-expressing neurons encode rewarding valence and compete with distinct Rspo2-expressing neurons in the BLA that encode negative valence. In the new study, Zhang, Kim, and Tonegawa set out to determine whether this competitive balance also underlies fear and its extinction.</p> <p>In fear extinction, an original fearful memory is thought to be essentially overwritten by a new memory that is not fearful. In the study, for instance, mice were exposed to mild shocks in a chamber, making them freeze due to the formation of a fearful memory. But the next day, when the mice were returned to the same chamber for a longer period of time without any further shocks, freezing gradually dissipated; hence, this treatment is called fear extinction training. The fundamental question, then, is whether the fearful memory is lost or just suppressed by the formation of a new memory during the fear extinction training.</p> <p>While the mice underwent fear extinction training, the scientists watched the activity of the different neural populations in the BLA. They saw that Ppp1r1b cells were more active and Rspo2 cells were less active in mice that experienced fear extinction. They also saw that while Rspo2 cells were mostly activated by the shocks and were inhibited during fear extinction, Ppp1r1b cells were mostly active during extinction memory training and retrieval, but were inhibited during the shocks.</p> <p>These and other experiments suggested to the authors that the hypothetical fear extinction memory may be formed in the Ppp1r1b neuronal population, and the team went on to demonstrate this rigorously. 
For this, they employed the technique previously pioneered in their lab for the identification and manipulation of the neuronal population that holds specific memory information, memory “engram” cells. Zhang labeled Ppp1r1b neurons that were activated during retrieval of fear extinction memory with the light-sensitive protein channelrhodopsin. When these neurons were activated by blue laser light during a second round of fear extinction training, it enhanced and accelerated the extinction. Moreover, when the engram cells were inhibited by another optogenetic technique, fear extinction was impaired because the Ppp1r1b engram neurons could no longer suppress the Rspo2 fear neurons. That allowed the fear memory to regain primacy.</p> <p>These data met the fundamental criteria for the existence of engram cells for fear extinction memory within the pBLA Ppp1r1b cell population: activation and reactivation by recall and enduring and off-line maintenance of the acquired extinction memory.</p> <p>Because Kim had previously shown Ppp1r1b neurons are activated by rewards and drive appetitive behavior and memory, the team sequentially tracked Ppp1r1b cell activity in mice that eagerly received a water reward followed by a food reward followed by fear extinction training and fear extinction memory retrieval. The overlap of Ppp1r1b neurons activated by fear extinction versus water reward was as high as the overlap of neurons activated by water versus food reward. And finally, artificial optogenetic activation of Ppp1r1b extinction memory engram cells was as effective as optogenetic activation of Ppp1r1b water reward-activated neurons in driving appetitive behaviors. Reciprocally, artificial optogenetic activation of water-responding Ppp1r1b neurons enhanced fear extinction training as efficiently as optogenetic activation of fear extinction memory engram cells. 
These results demonstrate that fear extinction is equivalent to bona fide rewards and therefore provide the neuroscientific basis for the widely held experience in daily life: omission of expected punishment is a reward.</p> <p><strong>What next?</strong></p> <p>By establishing this intimate connection between fear extinction and reward, and by identifying a genetically defined neuronal population (Ppp1r1b) that plays a crucial role in fear extinction, this study provides potential therapeutic targets for treating fear disorders like post-traumatic stress disorder and anxiety, Zhang says.</p> <p>From the basic scientific point of view, Tonegawa says, how fear extinction training specifically activates Ppp1r1b neurons would be an important question to address. More imaginatively, the results showing how Ppp1r1b neurons override Rspo2 neurons in fear extinction raise an intriguing question about whether a reciprocal dynamic might also occur in the brain and behavior. Investigating “joy extinction” via these mechanisms might be an interesting research topic.</p> <p>The research was supported by the RIKEN Brain Science Institute, the Howard Hughes Medical Institute, and the JPB Foundation.</p> The same neurons that store feelings of reward also store memories that suppress fearful ones, a new study shows. In this image, the broader population of Ppp1r1b neurons is labeled green while neurons storing a specific fear extinction memory are labeled red. Image: Tonegawa Lab/Picower Institute | Picower Institute, Biology, Brain and cognitive sciences, Memory, Anxiety, Neuroscience, Research, School of Science, Mental health, Learning “She” goes missing from presidential language Even when people believed Hillary Clinton would win the 2016 election, they did not use “she” to refer to the next president. 
Wed, 08 Jan 2020 01:00:20 -0500 Anne Trafton | MIT News Office <p>Throughout most of 2016, a significant percentage of the American public believed that the winner of the November 2016 presidential election would be a woman — Hillary Clinton.</p> <p>Strikingly, a new study from cognitive scientists and linguists at MIT, the University of Potsdam, and the University of California at San Diego shows that despite those beliefs, people rarely used the pronoun “she” when referring to the next U.S. president before the election. Furthermore, when reading about the future president, encountering the pronoun “she” caused a significant stumble in their reading.</p> <p>“There seemed to be a real bias against referring to the next president as ‘she.’ This was true even for people who most strongly expected and probably wanted the next president to be a female,” says Roger Levy, an MIT professor of brain and cognitive sciences and the senior author of the new study. “There’s a systematic underuse of ‘she’ pronouns for these kinds of contexts. It was quite eye-opening.”</p> <p>As part of their study, Levy and his colleagues also conducted similar experiments in the lead-up to the 2017 general election in the United Kingdom, which determined the next prime minister. In that case, people were more likely to use the pronoun “she” than “he” when referring to the next prime minister.</p> <p>Levy suggests that sociopolitical context may account for at least some of the differences seen between the U.S. 
and the U.K.: At the time, Theresa May was prime minister and very strongly expected to win, plus many Britons likely remember the long tenure of former Prime Minister Margaret Thatcher.</p> <p>“The situation was very different there because there was an incumbent who was a woman, and there is a history of referring to the prime minister as ‘she’ and thinking about the prime minister as potentially a woman,” he says.</p> <p>The lead author of the study is Titus von der Malsburg, a research affiliate at MIT and a researcher in the Department of Linguistics at the University of Potsdam, Germany. Till Poppels, a graduate student at the University of California at San Diego, is also an author of the paper, which appears in the journal <em>Psychological Science</em>.</p> <p><strong>Implicit linguistic biases</strong></p> <p>Levy and his colleagues began their study in early 2016, planning to investigate how people’s expectations about world events, specifically, the prospect of a woman being elected president, would influence their use of language. They hypothesized that the strong possibility of a female president might override the implicit bias people have toward referring to the president as “he.”</p> <p>“We wanted to use the 2016 electoral campaign as a natural experiment, to look at what kind of language people would produce or expect to hear as their expectations about who was likely to win the race changed,” Levy says.</p> <p>Before beginning the study, he expected that people’s use of the pronoun “she” would go up or down based on their beliefs about who would win the election. 
He planned to explore how long it would take for changes in pronoun use to appear, and how much of a boost “she” usage would experience if a majority of people expected the next president to be a woman.</p> <p>However, such a boost never materialized, even though Clinton was expected to win the election.</p> <p>The researchers performed their experiment 12 times between June 2016 and January 2017, with a total of nearly 25,000 participants from the Amazon Mechanical Turk platform. The study included three tasks, and each participant was asked to perform one of them. The first task was to predict the likelihood of three candidates winning the election — Clinton, Donald Trump, or Bernie Sanders. From those numbers, the researchers could estimate the percentage of people who believed the next president would be a woman. This number was higher than 50 percent during most of the period leading up to the election, and reached just over 60 percent right before the election.</p> <p>The next two tasks were based on common linguistics research methods — one to test people’s patterns of language production, and the other to test how the words they encounter affect their reading comprehension.</p> <p>To test language production, the researchers asked participants to complete a paragraph such as “The next U.S. president will be sworn into office in January 2017. After moving into the Oval Office, one of the first things that ….”</p> <p>In this task, about 40 percent of the participants used a pronoun in their text. Early in the study period, more than 25 percent of those participants used “he,” fewer than 10 percent used “she,” and around 50 percent used “they.” As the election got closer, and Clinton’s victory seemed more likely, the percentage of “she” usage never went up, but usage of “they” climbed to about 60 percent. 
While these results indicate that the singular “they” has reached widespread acceptance as a de facto standard in contemporary English, they also suggest a strong persistent bias against using “she” in a context where the gender of the individual referred to is not yet known.</p> <p>“After Clinton won the primary, by late summer, most people thought that she would win. Certainly Democrats, and especially female Democrats, thought that Clinton would win. But even in these groups, people were very reluctant to use ‘she’ to refer to the next president. It was never the case that ‘she’ was preferred over ‘he,’” Levy says.</p> <p>For the third task, participants were asked to read a short passage about the next president. As the participants read the text on a screen, they had to press a button to reveal each word of the sentence. This setup allows the researchers to measure how quickly participants are reading. Surprise or difficulty in comprehension leads to longer reading times.</p> <p>In this case, the researchers found that when participants encountered the pronoun “she” in a sentence referring to the next president, it cost them about a third of a second in reading time — a seemingly short amount of time that is nevertheless known from sentence processing research to indicate a substantial disruption relative to ordinary reading — compared to sentences that used “he.” This did not change over the course of the study.</p> <p>“For months, we were in a situation where large segments of the population strongly expected that a woman would win, yet those segments of the population actually didn’t use the word ‘she’ to refer to the next president, and were surprised to encounter ‘she’ references to the next president,” Levy says.</p> <p><strong>Strong stereotypes</strong></p> <p>The findings suggest that gender biases regarding the presidency are so deeply ingrained that they are extremely difficult to overcome even when people strongly believe that the next president 
will be a woman, Levy says.</p> <p>“It was surprising that the stereotype that the U.S. president is always a man would so strongly influence language, even in this case, which offered the best possible circumstances for particularized knowledge about an upcoming event to override the stereotypes,” he says. “Perhaps it’s an association of different pronouns with positions of prestige and power, or it’s simply an overall reluctance to refer to people in a way that indicates they’re female if you’re not sure.”</p> <p>The U.K. component of the study was conducted in June 2017 (before the election) and July 2017 (after the election but before Theresa May had successfully formed a government). Before the election, the researchers found that “she” was used about 25 percent of the time, while “he” was used less than 5 percent of the time. However, reading times for sentences referring to the prime minister as “she” were no faster than those for “he,” suggesting that there was still some bias against “she” in comprehension relative to usage preferences, even in a country that already has a woman prime minister.</p> <p>The type of gender bias seen in this study appears to extend beyond previously seen stereotypes that are based on demographic patterns, Levy says. For example, people usually refer to nurses as “she,” even if they don’t know the nurse’s gender, and more than 80 percent of nurses in the U.S. are female. In an ongoing study, von der Malsburg, Poppels, Levy, and recent MIT graduate Veronica Boyce have found that even for professions that have fairly equal representation of men and women, such as baker, “she” pronouns are underused.</p> <p>“If you ask people how likely a baker is to be male or female, it’s about 50/50. But if you ask people to complete text passages that are about bakers, people are twice as likely to use ‘he’ as ‘she,’” Levy says. 
“Embedded within the way that we use pronouns to talk about individuals whose identities we don’t know yet, or whose identities may not be definitive, there seems to be this systematic underconveyance of expectations for female gender.”</p> <p>The research was funded by the National Institutes of Health, a Feodor Lynen Research Fellowship from the Alexander von Humboldt Foundation, and an Alfred P. Sloan Fellowship.</p> A new study reveals that although a significant percentage of Americans believed Hillary Clinton would win the 2016 presidential election, people rarely used the pronoun “she” when referring to the next president.Image: MIT NewsResearch, Brain and cognitive sciences, Linguistics, School of Science, Women, Behavior, Language, Politics, National Institutes of Health (NIH) School of Science recognizes members with 2020 Infinite Kilometer Awards Four members of the School of Science honored for contributions to the Institute. Fri, 03 Jan 2020 10:30:01 -0500 School of Science <p>The MIT <a href="">School of Science</a> has announced the winners of the 2020 Infinite Kilometer Awards, which are presented annually to researchers within the school who are exceptional contributors to their communities.</p> <p>These winners are nominated by their peers and mentors for their hard work, which can include mentoring and advising, supporting educational programs, providing service to groups such as the MIT Postdoctoral Association, or some other form of contribution to their home departments, labs, and research centers, the school, and the Institute.</p> <p>The 2020 Infinite Kilometer Award winners in the School of Science are:</p> <ul> <li><a href="" target="_blank">Edgar Costa</a>, a research scientist in the Department of Mathematics, nominated by Professor Bjorn Poonen and Principal Research Scientist Andrew Sutherland;</li> <li><a href="" target="_blank">Casey Rodriguez</a>, an instructor in the Department of Mathematics, nominated by Professor Gigliola 
Staffilani;</li> <li><a href="" target="_blank">Rachel Ryskin</a>, a postdoc in the Department of Brain and Cognitive Sciences, nominated by Professor Edward Gibson; and</li> <li><a href="" target="_blank">Grayson Sipe</a>, a postdoc in the Picower Institute for Learning and Memory, nominated by Professor Mriganka Sur.</li> </ul> <p>A monetary award is granted to recipients, and a celebratory reception will be held later this spring in their honor, attended by those who nominated them, family, and friends, in addition to the soon-to-be-announced recipients of the 2020 Infinite Mile Award.</p> School of Science, Mathematics, Brain and cognitive sciences, Picower Institute, Awards, honors and fellowships, Graduate, postdoctoral, Staff, Community Study may explain how infections reduce autism symptoms An immune molecule sometimes produced during infection can influence the social behavior of mice. Wed, 18 Dec 2019 13:00:00 -0500 Anne Trafton | MIT News Office <p>For many years, some parents have noticed that their autistic children’s behavioral symptoms diminished when they had a fever. This phenomenon has been documented in at least two large-scale studies over the past 15 years, but it was unclear why fever would have such an effect.</p> <p>A new study from MIT and Harvard Medical School sheds light on the cellular mechanisms that may underlie this phenomenon. In a study of mice, the researchers found that in some cases of infection, an immune molecule called IL-17a is released and suppresses a small region of the brain’s cortex that has previously been linked to social behavioral deficits in mice.</p> <p>“People have seen this phenomenon before [in people with autism], but it’s the kind of story that is hard to believe, which I think stems from the fact that we did not know the mechanism,” says Gloria Choi, the Samuel A. Goldblith Career Development Assistant Professor of Applied Biology and an assistant professor of brain and cognitive sciences at MIT. 
“Now the field, including my lab, is trying hard to show how this works, all the way from the immune cells and molecules to receptors in the brain, and how those interactions lead to behavioral changes.”</p> <p>Although findings in mice do not always translate into human treatments, the study may help to guide the development of strategies that could help to reduce some behavioral symptoms of autism or other neurological disorders, says Choi, who is also a member of MIT’s Picower Institute for Learning and Memory.</p> <p>Choi and Jun Huh, an assistant professor of immunology at Harvard Medical School, are the senior authors of <a href="" target="_blank">the study</a>, which appears in <em>Nature</em> today. The lead authors of the paper are MIT graduate student Michael Douglas Reed and MIT postdoc Yeong Shin Yim.</p> <p><strong>Immune influence</strong></p> <p>Choi and Huh have previously explored other links between inflammation and autism. In 2016, <a href="">they showed</a> that mice born to mothers who experience a severe infection during pregnancy are much more likely to show behavioral symptoms such as deficits in sociability, repetitive behaviors, and abnormal communication. They found that this is caused by exposure to maternal IL-17a, which produces defects in a specific brain region of the developing embryos. This brain region, S1DZ, is part of the somatosensory cortex and is believed to be responsible for sensing where the body is in space.</p> <p>“Immune activation in the mother leads to very particular cortical defects, and those defects are responsible for inducing abnormal behaviors in offspring,” Choi says.</p> <p>A link between infection during pregnancy and autism in children has also been seen in humans. 
A 2010 study that included all children born in Denmark between 1980 and 2005 found that severe viral infections during the first trimester of pregnancy translated to a threefold increase in risk for autism, and serious bacterial infections during the second trimester were linked with a 1.42-fold increase in risk. These infections included influenza, viral gastroenteritis, and severe urinary tract infections.</p> <p>In the new study, Choi and Huh turned their attention to the often-reported link between fever and reduction of autism symptoms.</p> <p>“We wanted to ask whether we could use mouse models of neurodevelopmental disorders to recapitulate this phenomenon,” Choi says. “Once you see the phenomenon in animals, you can probe the mechanism.”</p> <p>The researchers began by studying mice that exhibited behavioral symptoms due to exposure to inflammation during gestation. They injected these mice with a bacterial component called LPS, which induces a fever response, and found that the animals’ social interactions were temporarily restored to normal.</p> <p>Further experiments revealed that during inflammation, these mice produce IL-17a, which binds to receptors in S1DZ — the same brain region originally affected by maternal inflammation. IL-17a reduces neural activity in S1DZ, which makes the mice temporarily more interested in interacting with other mice.</p> <p>If the researchers inhibited IL-17a or knocked out the receptors for IL-17a, this symptom reversal did not occur. They also showed that simply raising the mice’s body temperature did not have any effect on behavior, offering further evidence that IL-17a is necessary for the reversal of symptoms.</p> <p>“This suggests that the immune system uses molecules like IL-17a to directly talk to the brain, and it actually can work almost like a neuromodulator to bring about these behavioral changes,” Choi says. 
“Our study provides another example as to how the brain can be modulated by the immune system.”</p> <p>“What’s remarkable about this paper is that it shows that this effect on behavior is not necessarily a result of fever but the result of cytokines being made,” says Dan Littman, a professor of immunology at New York University, who was not involved in the study. “There’s a growing body of evidence that the central nervous system, in mammals at least, has evolved to be dependent to some degree on cytokine signaling at various times during development or postnatally.”</p> <p><strong>Behavioral effects</strong></p> <p>The researchers then performed the same experiments in three additional mouse models of neurological disorders. These mice lack a gene linked to autism and similar disorders — either Shank3, Cntnap2, or Fmr1. These mice all show deficits in social behavior similar to those of mice exposed to inflammation in the womb, even though the origin of their symptoms is different.</p> <p>Injecting those mice with LPS did produce inflammation, but it did not have any effect on their behavior. The reason for that, the researchers found, is that in these mice, inflammation did not stimulate IL-17a production. However, if the researchers injected IL-17a into these mice, their behavioral symptoms did improve.</p> <p>This suggests that mice who are exposed to inflammation during gestation end up with their immune systems somehow primed to more readily produce IL-17a during subsequent infections. Choi and Huh have <a href="">previously shown</a> that the presence of certain bacteria in the gut can also prime IL-17a responses. 
They are now investigating whether the same gut-residing bacteria contribute to the LPS-induced reversal of social behavior symptoms that they found in the new <em>Nature</em> study.</p> <p>“It was amazing to discover that the same immune molecule, IL-17a, could have dramatically opposite effects depending on context: Promoting autism-like behaviors when it acts on the developing fetal brain and ameliorating autism-like behaviors when it modulates neural activity in the adult mouse brain. This is the degree of complexity we are trying to make sense of,” Huh says.</p> <p>Choi’s lab is also exploring whether any immune molecules other than IL-17a may affect the brain and behavior.</p> <p>“What’s fascinating about this communication is the immune system directly sends its messengers to the brain, where they work as if they’re brain molecules, to change how the circuits work and how the behaviors are shaped,” Choi says.</p> <p>The research was funded by the Jeongho Kim Neurodevelopmental Research Fund, Perry Ha, the Hock E. Tan and K. Lisa Yang Center for Autism Research, the Simons Center for the Social Brain, the Simons Foundation Autism Research Initiative, the Champions of the Brain Weedon Fellowship, and a National Science Foundation Graduate Research Fellowship.</p> MIT and Harvard Medical School researchers have uncovered a cellular mechanism that may explain why some children with autism experience a temporary reduction in behavioral symptoms when they have a fever.Research, Brain and cognitive sciences, Picower Institute, School of Science, Autism, National Science Foundation (NSF) Study probing visual memory and amblyopia unveils many-layered mystery Scientists pinpoint the role of a receptor in vision degradation in amblyopia. 
Tue, 17 Dec 2019 15:40:01 -0500 David Orenstein | Picower Institute <div> <div> <div> <div> <p>In decades of studying how neural circuits in the brain’s visual cortex adapt to experience, MIT Professor Mark Bear’s lab has followed the science wherever it has led. This approach has yielded the discovery of cellular mechanisms serving visual recognition memory, in which the brain learns what sights are familiar so it can focus on what’s new, as well as a potential therapy for <a href="">amblyopia</a>, a disorder where children born with disrupted vision in one eye can lose visual acuity there permanently without intervention. But this time, his lab’s latest investigation has yielded surprising new layers of mystery.</p> </div> </div> </div> </div> <div> <div> <div> <div> <div> <div> <div> <div> <div> <div> <div> <p>Heading into the experiments described in a new paper in <em>Cerebral Cortex, </em>Bear and his team expected to confirm that key proteins called NMDA receptors act specifically in neurons in layer 4 of the visual cortex to make the circuit connection changes, or “plasticity,” necessary for both visual recognition memory and amblyopia. Instead, the study has produced unexpectedly divergent results.</p> <p>“There are two stories here,” says Bear, who is a co-senior author and the Picower Professor of Neuroscience in the Picower Institute for Learning and Memory. “One is that we have further pinpointed where to look for the root causes of amblyopia. The other is that we have now completely blown up what we thought was happening in stimulus-selective response potentiation, or SRP, the synaptic change believed to give rise to visual recognition memory.”</p> <p>The cortex is built like a stack of pancakes, with distinct layers of cells serving different functions. Layer 4 is considered to be the primary “input layer” that receives relatively unprocessed information from each eye. 
Plasticity that is restricted to one eye has been assumed to occur at this early stage of cortical processing, before the information from the two eyes becomes mixed. However, while the evidence demonstrates that NMDA receptors in layer 4 neurons are indeed necessary for the degradation of vision in a deprived eye, they apparently play no role in how neural connections, or synapses, serving the uncompromised eye strengthen to compensate, and similarly don’t matter for the development of SRP. That’s even though NMDA receptors in visual cortex neurons have previously been shown to matter directly in these phenomena, and layer 4 neurons are known to participate in these circuits via telltale changes in electrical activity.</p> <p>“These findings reveal two key things,” says Samuel Cooke, co-senior author and a former member of the Bear Lab who now has his own lab at King’s College London. “First, that the neocortical circuits modified to enhance cortical responses to sensory inputs during deprivation, or to stimuli that have become familiar, reside elsewhere in neocortex, revealing a complexity that we had not previously appreciated. Second, the results show that effects clearly manifest in one region of the brain can actually be echoes of plasticity occurring elsewhere, thereby illustrating the importance of not only observing biological phenomena, but also understanding their origins by locally disrupting known underlying mechanisms.”</p> <p>To perform the study, Bear Lab postdoc and lead author Ming-fai Fong used a genetic technique to specifically knock out NMDA receptors in excitatory neurons in layer 4 of the visual cortex of mice. Armed with that tool, she could then investigate the consequences for visual recognition memory and “monocular deprivation,” a lab model for amblyopia in which one eye is temporarily closed early in life. 
The hypothesis was that knocking out the NMDA receptor in these cells in layer 4 would prevent SRP from taking hold amid repeated presentations of the same stimulus, and would prevent the degradation of vision in a deprived eye, as well as the commensurate strengthening of the unaffected eye.</p> <p>“We were gratified to note that the amblyopia-like effect of losing cortical vision as a result of closing an eye for several days in early life was completely prevented,” Cooke says. “However, we were stunned to find that the two enhancing forms of plasticity remained completely intact.”</p> <p>Fong says that with continued work, the lab hopes to pinpoint where in the circuit NMDA receptors are triggering SRP, and the compensatory increase in strength in a non-deprived eye, after monocular deprivation. Doing so, she says, could have clinical implications.</p> <p>“Our study identified a crucial component in the visual cortical circuit that mediates the plasticity underlying amblyopia,” she says. “This study also highlights the ongoing need to identify the distinct components in the visual cortical circuit that mediate visual enhancement, which could be important both in developing treatments for visual disability as well as developing biomarkers for neurodevelopmental disorders. These efforts are ongoing in the lab.”</p> <p>The search now moves to other layers, Bear says, including layer 6, which also receives unprocessed input from each eye.</p> <p>“Clearly, this is not the end of the story,” Bear says.</p> <p>In addition to Fong, Bear, and Cooke, the paper’s other authors are Peter Finnie, Taekeun Kim, Aurore Thomazeau, and Eitan Kaplan.</p> <p>The National Eye Institute and the JPB Foundation funded the study.</p> </div> </div> </div> </div> </div> </div> </div> </div> </div> </div> </div> The visual cortex, where the brain processes visual input, is made like a stack of pancakes. 
In a new study, scientists sought to determine the role in several visual phenomena of a receptor on neurons in layer 4. Photo: Bear Lab/Picower InstitutePicower Institute, Brain and cognitive sciences, School of Science, Neuroscience, Vision, Genetics, Memory, Research Supporting students in Puerto Rico after a hurricane’s devastation Postdoc Héctor De Jesús-Cortés works to build up the STEM pipeline from his homeland to MIT and beyond. Fri, 13 Dec 2019 00:00:00 -0500 Fernanda Ferreira | School of Science <p>When Hurricane Maria hit Puerto Rico in September 2017, Héctor De Jesús-Cortés was vacationing on the island with his wife, Edmarie Guzmán-Vélez. “Worst vacation ever, but it actually turned out to be the most important in my life,” says De Jesús-Cortés. In the days immediately after the hurricane, both focused on helping their families get their bearings; after that first week, however, they were itching to do more. That itch would take them to San Juan, Puerto Rico’s capital, where they asked the then-secretary of education a simple question: “How can we help?”</p> <p>With De Jesús-Cortés’ PhD in neuroscience and Guzmán-Vélez’s PhD in clinical psychology, they soon became involved in an effort led by the Department of Education to help students and school staff, as well as the community at large, troubled by the hurricane. “Everyone was traumatized, so if you bring kids to teachers who are also traumatized, that’s a bad recipe,” explains De Jesús-Cortés.</p> <p>De Jesús-Cortés and Guzmán-Vélez connected with their friend Rosaura Orengo-Aguayo, a clinical psychologist and assistant professor at the Medical University of South Carolina who studies traumatic stress and Hispanic populations. Working together with the Department of Education and the U.S. Department of Health and Human Services, they developed a program to address trauma in schools. 
The Esperanza, or “hope,” program is ongoing and has already trained hundreds of school staff members on how to manage trauma and anxiety, and to identify these manifestations in students.</p> <p>Back in Boston, De Jesús-Cortés has continued his efforts for Puerto Rico, raising funds for micro-entrepreneurs and teaching neuroscience in online classes for undergraduates on the island. Each effort is guided by that same simple question — How can we help? His latest effort, along with Guzmán-Vélez, is a precollege summer program at MIT that will give Puerto Rican students a taste of scientific research.</p> <p><strong>A sense of possibility</strong></p> <p>For De Jesús-Cortés, teaching is more than just a transfer of knowledge. “I see teaching as mentorship,” he says. “I want students to be exposed to opportunities, because growing up in Puerto Rico, I know how difficult it can be for some students to get those opportunities.”</p> <p>While De Jesús-Cortés was an undergraduate at the University of Puerto Rico, he participated in Minority Access for Research Careers (MARC), a National Institutes of Health-funded program that supports underrepresented minority undergraduates as they move toward careers in biomedical sciences. “We had workshops every month about applications; they would bring recruiters, and they would also pay for summer internships,” explains De Jesús-Cortés.</p> <p>MARC allowed De Jesús-Cortés to see a career in science as a possibility, and he envisions that the summer school, whose inaugural class will be in summer 2020, will do something similar. “The idea is to have kids first spend two weeks in Puerto Rico and expose them to research at the undergraduate level,” explains De Jesús-Cortés. The students will be at the Universidad del Sagrado Corazón in Puerto Rico; the university has partnered with De Jesús-Cortés on the project. 
“Then they travel to Boston and see what research is happening here.” The 15-20 students will spend two weeks in Massachusetts, living in the MIT dorms, visiting labs, and learning how to apply to colleges in the United States.</p> <p>The MARC program also gave De Jesús-Cortés a community. “To this day, I talk to my MARC fellows,” he says, and that’s something he hopes to replicate with the summer students. “Each student will have a mentor, and I want them to keep talking after the program,” De Jesús-Cortés says.</p> <p>The summer school will not just give students a taste of scientific research; it will also show that universities like MIT are within their reach. “I was born and raised in Puerto Rico, and my schools didn’t have the best resources in STEM,” De Jesús-Cortés says. He hopes that, by seeing researchers in Greater Boston who have the same background, the summer students will see MIT and a career in science as a possibility. “Students need to be exposed to mentors and role models that prove that it can be done,” he says.</p> <p><strong>Fixing vision</strong></p> <p>De Jesús-Cortés works on the summer school, and his other efforts for Puerto Rico and the Latino community, in addition to his neuroscience research. As a postdoc in the lab of Mark Bear, the Picower Professor of Neuroscience, he’s trying to use electrophysiology to figure out when neurons in the brain need a little help to communicate.<br /> &nbsp;<br /> Neurons communicate with one another using both chemical and electrical activity. An action potential, which is electrical, travels down the arms of the neuron, but when it reaches the end of that arm, the synapse, the communication becomes chemical. The electrical signal stimulates the release of neurotransmitters, which reach across the gap between two neurons, stimulating the neighboring neuron to make its own action potential.<br /> Not every neuron is equally capable of producing action potentials. 
“In a neurodegenerative disorder, before the neuron dies, it’s sick,” says De Jesús-Cortés. “And if it’s sick, it’s not going to communicate electrically very well.” De Jesús-Cortés wants to use this diminished electrical activity as a biomarker for disorders in the brain. “If I can detect that diminished activity with an electrode, then I can intervene with a pharmacological agent that will prevent the death of neurons,” he explains.</p> <p>To test this, De Jesús-Cortés is focusing on amblyopia, a condition more commonly known as lazy eye. Lazy eye happens when the communication between the visual cortex — a region in the back of the brain where visual information is received and processed — and one of the eyes is impaired, resulting in blurred vision. Electrical activity in the visual cortex that corresponds to the lazy eye is also down, and De Jesús-Cortés can detect that decreased activity using electrodes. &nbsp;</p> <p>When amblyopia is caught early on, a combination of surgery and an eye patch can strengthen the once-lazy eye, getting rid of the blurriness. “But, if you catch that condition after 8 years old, the patching doesn’t work as well,” says De Jesús-Cortés. Another postdoc in the Bear Lab, Ming-fai Fong, figured out that tetrodotoxin, which is found in puffer fish, is able to reboot the lazy eye, bringing up electrical activity in the visual cortex and giving mice with amblyopia perfect vision mere hours after receiving a drop of the toxin.</p> <p>But we don’t actually know how tetrodotoxin is doing this on a molecular level. “Now, putting tetrodotoxin in humans will be a little bit difficult,” says De Jesús-Cortés. Add too much toxin and you could cause a number of new problems. He is investigating what exactly the toxin is doing to sick neurons. 
Using that information, he then wants to design alternative treatments that have the same or even better effect: “Find neurons that are quiet because they are sick, and reboot them with a pharmacological agent,” he says.</p> <p>In the future, De Jesús-Cortés wants to look beyond the visual cortex, at other regions of the brain and other conditions like Parkinson’s, Alzheimer’s, and autism, finding the hurting neurons and giving them a boost.<br /> In both his neuroscience research and his work for Puerto Rico, De Jesús-Cortés is passionate about finding ways to help. But he has also learned that for all these efforts to succeed, he needs to accept help as well. “When you are working on so many projects at the same time, you need a lot of different people that believe in your vision,” he says. “And if you’re helping them, you believe in their vision.” For De Jesús-Cortés, this reciprocity is one of the most important aspects of his work, and it’s a guiding principle in his research and life. “I believe in collaboration like nothing else.”</p> At MIT, Héctor De Jesús-Cortés studies neuronal electrical activity underlying diseases such as amblyopia, or lazy eye.Photo: Steph StevensPicower Institute, Brain and cognitive sciences, School of Science, Diversity and inclusion, Research, Profile, graduate, Graduate, postdoctoral, Natural disasters, Latin America, Education, teaching, academics Differences between deep neural networks and human perception Stimuli that sound or look like gibberish to humans are indistinguishable from naturalistic stimuli to deep networks. Thu, 12 Dec 2019 13:05:01 -0500 Kenneth I. Blum | Center for Brains, Minds and Machines <p>When your mother calls your name, you know it’s her voice — no matter the volume, even over a poor cell phone connection. And when you see her face, you know it’s hers — if she is far away, if the lighting is poor, or if you are on a bad FaceTime call. This robustness to variation is a hallmark of human perception. 
On the other hand, we are susceptible to illusions: We might fail to distinguish between sounds or images that are, in fact, different. Scientists have explained many of these illusions, but we lack a full understanding of the invariances in our auditory and visual systems.
</p> <p>Deep neural networks also have performed speech recognition and image classification tasks with impressive robustness to variations in the auditory or visual stimuli. But are the invariances learned by these models similar to the invariances learned by human perceptual systems? A group of MIT researchers has discovered that they are different. They presented their findings yesterday at the 2019 <a href="">Conference on Neural Information Processing Systems</a>.
</p> <p>The researchers made a novel generalization of a classical concept: “metamers” — physically distinct stimuli that generate the same perceptual effect. The most famous examples of metamer stimuli arise because most people have three different types of cones in their retinae, which are responsible for color vision. The perceived color of any single wavelength of light can be matched exactly by a particular combination of three lights of different colors — for example, red, green, and blue lights. Nineteenth-century scientists inferred from this observation that humans have three different types of bright-light detectors in their eyes. This is the basis for electronic color displays on all of the screens we stare at every day. Another example in the visual system is that when we fix our gaze on an object, we may perceive surrounding visual scenes that differ at the periphery as identical. In the auditory domain, something analogous can be observed. For example, the “textural” sound of two swarms of insects might be indistinguishable, despite differing in the acoustic details that compose them, because they have similar aggregate statistical properties. In each case, the metamers provide insight into the mechanisms of perception, and constrain models of the human visual or auditory systems.
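</p> <p>The color-matching example above is, at bottom, linear algebra: the three cone responses are a 3-by-N matrix applied to an N-dimensional light spectrum, so any two spectra that differ by a vector in that matrix’s null space are metamers. The toy sketch below (with made-up sensitivity values, not measured cone data) constructs two physically different lights with identical cone responses.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy cone sensitivities: 3 cone types x 31 wavelength bins.
# (Illustrative random values, not real human cone data.)
C = rng.random((3, 31))

# A "light spectrum": power in each wavelength bin, kept well above zero.
s1 = 0.5 + 0.5 * rng.random(31)

# Any direction in the null space of C changes the physical spectrum
# without changing the three cone responses at all.
null_vec = np.linalg.svd(C)[2][-1]            # last right singular vector
s2 = s1 + 0.3 * null_vec / np.abs(null_vec).max()

cones1, cones2 = C @ s1, C @ s2
# s1 and s2 are physically distinct lights that are cone-for-cone identical.
```

<p>Because the matrix maps 31 spectral values down to just three numbers, its null space is large, which is why so many physically distinct lights are perceptually interchangeable.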
</p> <p>In the current work, the researchers randomly chose natural images and sound clips of spoken words from standard databases, and then synthesized sounds and images so that deep neural networks would sort them into the same classes as their natural counterparts. That is, they generated physically distinct stimuli that are classified identically by models, rather than by humans. This is a new way to think about metamers, generalizing the concept to swap the role of computer models for human perceivers. They therefore called these synthesized stimuli “model metamers” of the paired natural stimuli. The researchers then tested whether humans could identify the words and images.
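</p> <p>In spirit, the synthesis procedure is gradient descent on the stimulus itself: start from noise and nudge the input until the model’s internal responses match those evoked by a natural stimulus. The numpy sketch below uses a hypothetical one-stage “network” (a random linear map with a smooth softplus nonlinearity) as a stand-in for a real deep net; it illustrates the idea rather than the authors’ implementation.</p>

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical one-stage "network": 64-dim stimulus -> 16 smooth units.
W = rng.standard_normal((16, 64)) / 8.0
model = lambda x: np.logaddexp(0.0, W @ x)    # softplus activations

x_nat = rng.standard_normal(64)               # stand-in "natural" stimulus
target = model(x_nat)                         # responses to be matched

# Start from unrelated noise and descend on the activation-matching loss
# 0.5 * ||model(x) - target||^2 with respect to the stimulus x.
x = rng.standard_normal(64)
loss0 = 0.5 * np.sum((model(x) - target) ** 2)
for _ in range(3000):
    a = W @ x
    err = np.logaddexp(0.0, a) - target
    grad = W.T @ (err / (1.0 + np.exp(-a)))   # chain rule through softplus
    x -= 0.1 * grad
loss = 0.5 * np.sum((model(x) - target) ** 2)
# x now drives the model much like x_nat does, yet remains a different input.
```

<p>Note that the gradient always lies in the row space of the weight matrix, so the component of the starting noise in the null space is never touched: the synthesized input matches the model’s responses while staying far from the natural stimulus, which is the defining property of a model metamer.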
</p> <p>“Participants heard a short segment of speech and had to identify from a list of words which word was in the middle of the clip. For the natural audio this task is easy, but for many of the model metamers humans had a hard time recognizing the sound,” explains first-author Jenelle Feather, a graduate student in the <a href="" target="_blank">MIT Department of Brain and Cognitive Sciences</a> (BCS) and a member of the <a href="" target="_blank">Center for Brains, Minds, and Machines</a> (CBMM). That is, humans would not put the synthetic stimuli in the same class as the spoken word “bird” or the image of a bird. In fact, model metamers generated to match the responses of the deepest layers of the model were generally unrecognizable as words or images by human subjects.
</p> <p><a href="">Josh McDermott</a>, associate professor in BCS and investigator in CBMM, makes the following case: “The basic logic is that if we have a good model of human perception, say of speech recognition, then if we pick two sounds that the model says are the same and present these two sounds to a human listener, that human should also say that the two sounds are the same. If the human listener instead perceives the stimuli to be different, this is a clear indication that the representations in our model do not match those of human perception.”
</p> <p>Examples of the model metamer stimuli can be found in the video below.</p> <div class="cms-placeholder-content-video"></div> <p>Joining Feather and McDermott on the paper are Alex Durango, a post-baccalaureate student, and Ray Gonzalez, a research assistant, both in BCS.
</p> <p>There is another type of failure of deep networks that has received a lot of attention in the media: adversarial examples (see, for example, "<a href="">Why did my classifier just mistake a turtle for a rifle?</a>"). These are stimuli that appear similar to humans but are misclassified by a model network (by design — they are constructed to be misclassified). They are complementary to the stimuli generated by Feather's group, which sound or appear different to humans but are designed to be co-classified by the model network. The vulnerabilities of model networks exposed to adversarial attacks are well-known — face-recognition software might mistake identities; automated vehicles might not recognize pedestrians.
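The complementary nature of the two failure modes is easiest to see in a toy example. Below is a hedged, fast-gradient-sign-style sketch with a made-up linear classifier (not the turtle/rifle attack itself): an imperceptibly small step against the gradient of the score flips the model's decision.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear classifier: score > 0 -> one class, score < 0 -> the other.
w = rng.normal(size=20)                # classifier weights
x = rng.normal(size=20)
x = x + w * (0.2 - w @ x) / (w @ w)    # place x just barely on the positive side

# Fast-gradient-sign-style perturbation: a small step against the score's gradient.
eps = 0.05
x_adv = x - eps * np.sign(w)

print(w @ x > 0, w @ x_adv > 0)        # the tiny perturbation flips the class
```

An adversarial example like `x_adv` looks nearly identical to `x` but is classified differently; a model metamer is the mirror image, looking very different while being classified the same.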
</p> <p>The importance of this work lies in improving models of perception beyond deep networks. Although the standard adversarial examples indicate differences between deep networks and human perceptual systems, the new stimuli generated by the McDermott group arguably represent a more fundamental model failure — they show that generic examples of stimuli classified as the same by a deep network produce wildly different percepts for humans.
</p> <p>The team also figured out ways to modify the model networks to yield metamers that were more plausible sounds and images to humans. As McDermott says, “This gives us hope that we may be able to eventually develop models that pass the metamer test and better capture human invariances.”
</p> <p>“Model metamers demonstrate a significant failure of present-day neural networks to match the invariances in the human visual and auditory systems,” says Feather. “We hope that this work will provide a useful behavioral measuring stick to improve model representations and create better models of human sensory systems.”
</p> Associate Professor Josh McDermott (left) and graduate student Jenelle Feather generated physically distinct stimuli that are classified identically by models, rather than by humans.Photos: Justin Knight and Kris BrewerBrain and cognitive sciences, Center for Brains Minds and Machines, McGovern Institute, Research, Machine learning, Artificial intelligence, School of Science, Neuroscience Fueled by the power of stories A fascination with storytelling led K. Guadalupe Cruz to graduate studies in neuroscience and shapes her work to promote inclusivity at MIT. Thu, 05 Dec 2019 00:00:00 -0500 Fernanda Ferreira | School of Science <p>K. Guadalupe Cruz’s path into neuroscience began with storytelling.</p> <p>“For me, it was always interesting that we are capable of keeping knowledge over so many generations,” says Cruz, a PhD student in the Department of Brain and Cognitive Sciences. For millennia, information has been passed down through the stories shared by communities, and Cruz wanted to understand how that information was transferred from one person to the next. “That was one of my first big questions,” she says.</p> <p>Cruz has been asking this question since high school and the urge to answer it led her to anthropology, psychology, and linguistics, but she felt like something was missing. “I wanted a mechanism,” she explains. “So I kept going further and further, and eventually ended up in neuroscience.”</p> <p>As an undergraduate at the University of Arizona, Cruz became fascinated with the sheer complexity of the brain. “We started learning a lot about different animals and how their brains worked,” says Cruz. “I just thought it was so cool,” she adds. That fascination got her into the lab and Cruz has never left. 
“I’ve been doing research ever since.”</p> <p><strong>A sense of space </strong></p> <p>If you’ve ever seen a model of the brain, you’ve probably seen one that is divided into regions, each shaded with a different color and with its own distinct function. The frontal lobe in red plans, the cerebellum in blue coordinates movement, the hippocampus in green remembers. But this is an oversimplification.</p> <p>“The brain isn’t entirely modular,” says Cruz. Different parts of the brain don’t have a single function, but rather a number of functions, and their complexity increases toward the front of the brain. The intricacy of these frontal regions is embodied in their anatomy: “They have a lot of cells and they’re heavily interconnected,” she explains. These frontal regions encode many types of information, which means they are involved in a number of different functions, sometimes in abstract ways that are difficult to unravel.</p> <p>The frontal region Cruz is bent on demystifying is the anterior cingulate cortex, or ACC, a part of the brain that wraps around the corpus callosum, which divides the outer layers of the brain into left and right hemispheres. Working with mice in Professor Mriganka Sur’s lab, Cruz looks at the role of the ACC in coordinating different downstream brain structures in orienting tasks. In humans, the ACC is involved in motivation, but in mice it has a role in visually guided orienting.</p> <p>“Everything you experience in the world is relative to your own body,” says Cruz. Being able to determine where your body is in space is essential for navigating through the world. To explain this, Cruz gives the example of a driver making a turn. “If you have to do a left turn, you’re going to need to use different information to determine whether you’re allowed to make that turn and if that’s the right choice,” Cruz explains. 
The ACC in this analogy is the driver: It has to take in all the information about the surrounding world, decide what to do, and then send this decision to other parts of the brain that control movement.</p> <p>To study this, Cruz gives mice a simple task: She shows them two squares of different shades on a screen and asks them to move the darker square. “The idea is, how does this area of the brain take in this information, compare the two squares and decide which movement is correct,” she explains. Many researchers study how information gets to the ACC, but Cruz is interested in what happens after the information arrives, focusing on the processing and output ends of the equation, particularly in deciphering the contributions of different brain connections to the resulting action.</p> <p>Cruz uses optogenetics to figure out which areas of the brain are necessary for decision-making. Optogenetics is a technique that uses light to turn on or off previously targeted neurons or areas of the brain. “This allows us to causally test whether parts of a circuit are required for a behavior or not,” she explains. Cruz distills it even further: “But mostly, it just lets us know that if you screw with this area, you’re going to screw something up.”</p> <p><strong>Community builder </strong></p> <p>At MIT, Cruz has been able to ask the neuroscience questions she’s captivated by, but coming to the Institute also made her more aware of how few underrepresented minorities, or URMs, there are in science broadly. “I started realizing how academia is not built for us, or rather, is built to exclude us,” says Cruz. “I saw these problems, and I wanted to do something to address them.”</p> <p>Cruz has focused many of her efforts on community building. “A lot of us come from communities that are very ‘other’ oriented, and focused on helping one another,” she explains. One of her initiatives is Community Lunch, a biweekly casual lunch in the brain and cognitive sciences department. 
“It’s sponsored by the School of Science for basically anybody that’s a person of color in academia,” says Cruz. The lunch includes graduate students, postdocs, and technicians who come together to talk about their experiences in academia. “It’s kind of like a support group,” she says. Connecting with people that have shared experiences is important, she adds: “You get to talk about things and realize this is a feeling that a lot of people have.”</p> <p>Another goal of Cruz’s is to make sure MIT understands the hurdles that many URMs experience in academia. For instance, applying to graduate school or having to cover costs for conferences can put a real strain on finances. “I applied to 10 programs; I was eating cereal every day for a month,” remembers Cruz. “I try to bring that information to light, because faculty and administrators have often never experienced it.”</p> <p>Cruz also is the representative for the LGBT community on the MIT Graduate Student Council and a member of LGBT Grad, a student group run by and for MIT’s LGBT grad students and postdocs. “LGBT Grad is basically a social club for the community, and we try to organize events to get to know each other,” says Cruz. According to Cruz, graduate school can feel pretty lonely for members of the LGBT community, so, similar to her work with URMs, Cruz concentrates on bringing people together. “I can’t fix the whole system, which can be very frustrating at times, but I focused my efforts on supporting people and allowing us to build a community.”</p> <p>As in her research, Cruz again comes back to the importance of storytelling. In her activism on campus, she wants to make sure the stories of URMs are known and, in doing so, help remove the obstacles faced by the generations of students that come after her.</p> K. Guadalupe Cruz studies the neuroscience of decision-making and creates community in the Department of Brain and Cognitive Sciences. 
Photo: Steph StevensSchool of Science, Brain and cognitive sciences, Picower Institute, Students, Graduate, postdoctoral, Diversity and inclusion, Women in STEM, Student life, Profile, Community, Neuroscience Controlling attention with brain waves Study shows that people can boost attention by manipulating their own alpha brain waves. Wed, 04 Dec 2019 10:52:23 -0500 Anne Trafton | MIT News Office <p>Having trouble paying attention? MIT neuroscientists may have a solution for you: Turn down your alpha brain waves. In a new study, the researchers found that people can enhance their attention by controlling their own alpha brain waves based on neurofeedback they receive as they perform a particular task.</p> <p>The study found that when subjects learned to suppress alpha waves in one hemisphere of their parietal cortex, they were able to pay better attention to objects that appeared on the opposite side of their visual field. This is the first time that this cause-and-effect relationship has been seen, and it suggests that it may be possible for people to learn to improve their attention through neurofeedback.</p> <p>“There’s a lot of interest in&nbsp;using neurofeedback to try to help people with various brain disorders and behavioral problems,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research. “It’s a completely noninvasive way of controlling and testing the role of different types of brain activity.”</p> <p>It’s unknown how long these effects might last and whether this kind of control could be achieved with other types of brain waves, such as beta waves, which are linked to Parkinson’s disease. The researchers are now planning additional studies of whether this type of neurofeedback training might help people suffering from attentional or other neurological disorders.</p> <p>Desimone is the senior author of <a href="" target="_blank">the paper</a>, which appears in <em>Neuron</em> on Dec. 4. 
McGovern Institute postdoc Yasaman Bagherzadeh is the lead author of the study. Daniel Baldauf, a former McGovern Institute research scientist, and Dimitrios Pantazis, a McGovern Institute principal research scientist, are also authors of the paper.</p> <p><strong>Alpha and attention</strong></p> <p>There are billions of neurons in the brain, and their combined electrical signals generate oscillations known as brain waves. Alpha waves, which oscillate in the frequency of 8 to 12 hertz, are believed to play a role in filtering out distracting sensory information.</p> <p>Previous studies have shown a strong correlation between attention and alpha brain waves, particularly in the parietal cortex. In humans and in animal studies, a decrease in alpha waves has been linked to enhanced attention. However, it was unclear if alpha waves control attention or are just a byproduct of some other process that governs attention, Desimone says.</p> <p>To test whether alpha waves actually regulate attention, the researchers designed an experiment in which people were given real-time feedback on their alpha waves as they performed a task. Subjects were asked to look at a grating pattern in the center of a screen, and told to use mental effort to increase the contrast of the pattern as they looked at it, making it more visible.</p> <p>During the task, subjects were scanned using magnetoencephalography (MEG), which reveals brain activity with millisecond precision. The researchers measured alpha levels in both the left and right hemispheres of the parietal cortex and calculated the degree of asymmetry between the two levels. As the asymmetry between the two hemispheres grew, the grating pattern became more visible, offering the participants real-time feedback.</p> <p>Although subjects were not told anything about what was happening, after about 20 trials (which took about 10 minutes), they were able to increase the contrast of the pattern. 
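The feedback loop just described (alpha power in each hemisphere, an asymmetry index, a contrast value) can be sketched roughly as follows. This is an illustration only: the function names, the simple FFT power estimate, and the sign and scaling of the asymmetry-to-contrast mapping are all assumptions, not the study's actual analysis pipeline.

```python
import numpy as np

def alpha_power(signal, fs=1000.0):
    """Mean power in the 8-12 Hz alpha band (simple FFT estimate)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 8) & (freqs <= 12)
    return power[band].mean()

def feedback_contrast(left, right, fs=1000.0):
    """Map left/right alpha asymmetry to a 0-1 stimulus contrast (direction assumed)."""
    l, r = alpha_power(left, fs), alpha_power(right, fs)
    asym = (r - l) / (r + l)          # -1 .. 1
    return (asym + 1) / 2             # 0 .. 1

# Simulated MEG-like traces: left hemisphere with strong 10 Hz alpha,
# right hemisphere with suppressed alpha (as after training).
t = np.arange(0, 1, 1 / 1000.0)
rng = np.random.default_rng(2)
left = 5 * np.sin(2 * np.pi * 10 * t) + rng.normal(size=t.size)
right = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(size=t.size)

print(feedback_contrast(left, right), feedback_contrast(right, left))
```

The larger the asymmetry between hemispheres, the closer the feedback value sits to one end of the contrast range, which is the signal participants implicitly learned to drive.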
The MEG results indicated they had done so by controlling the asymmetry of their alpha waves.</p> <p>“After the experiment, the subjects said they knew that they were controlling the contrast, but they didn’t know how they did it,” Bagherzadeh says. “We think the basis is conditional learning — whenever you do a behavior and you receive a reward, you’re reinforcing that behavior. People usually don’t have any feedback on their brain&nbsp;activity, but when we provide it to them and reward them, they learn by practicing.”</p> <p>Although the subjects were not consciously aware of how they were manipulating their brain waves, they were able to do it, and this success translated into enhanced attention on the opposite side of the visual field. As the subjects looked at the pattern in the center of the screen, the researchers flashed dots of light on either side of the screen. The participants had been told to ignore these flashes, but the researchers measured how their visual cortex responded to them.</p> <p>One group of participants was trained to suppress alpha waves in the left side of the brain, while the other was trained to suppress the right side. In those who had reduced alpha on the left side, their visual cortex showed a larger response to flashes of light on the right side of the screen, while those with reduced alpha on the right side responded more to flashes seen on the left side.</p> <p>“Alpha manipulation really was controlling people’s attention, even though they didn’t have any clear understanding of how they were doing it,” Desimone says.</p> <p><strong>Persistent effect</strong></p> <p>After the neurofeedback training session ended, the researchers asked subjects to perform two additional tasks that involve attention, and found that the enhanced attention persisted. 
In one experiment, subjects were asked to watch for a grating pattern, similar to what they had seen during the neurofeedback task, to appear. In some of the trials, they were told in advance to pay attention to one side of the visual field, but in others, they were not given any direction.</p> <p>When the subjects were told to pay attention to one side, that instruction was the dominant factor in where they looked. But if they were not given any cue in advance, they tended to pay more attention to the side that had been favored during their neurofeedback training.</p> <p>In another task, participants were asked to look at an image such as a natural outdoor scene, urban scene, or computer-generated fractal shape. By tracking subjects’ eye movements, the researchers found that people spent more time looking at the side that their alpha waves had trained them to pay attention to.</p> <p>“It is promising that the effects did seem to persist afterwards,” says Desimone, though more study is needed to determine how long these effects might last.</p> <p>“It would be interesting to understand how long-lasting these effects are, and whether you can use them therapeutically, because there’s some evidence that alpha oscillations are different in people who have attention deficits and hyperactivity disorders,” says Sabine Kastner, a professor of psychology at the Princeton Neuroscience Institute, who was not involved in the research. 
“If that is the case, then at least in principle, one might use this neurofeedback method to enhance their attention.”</p> <p>The research was funded by the McGovern Institute.</p> MIT neuroscientists have shown that people can enhance their attention by using neurofeedback to decrease alpha waves in one side of the parietal cortex.Image: Yasaman BagherzadehResearch, Behavior, Brain and cognitive sciences, McGovern Institute, School of Science, Neuroscience Two MIT professors named 2019 fellows of the National Academy of Inventors Li-Huei Tsai and Christopher Schuh recognized for research innovations addressing Alzheimer’s disease and metal mechanics. Tue, 03 Dec 2019 10:00:01 -0500 David Orenstein | Picower Institute <p>The National Academy of Inventors has selected two MIT faculty members, neuroscientist Li-Huei Tsai and materials scientist Christopher Schuh, as members of its 2019 class of new fellows.</p> <p>NAI fellows “have demonstrated a highly prolific spirit of innovation in creating or facilitating outstanding inventions that have made a tangible impact on the quality of life, economic development and welfare of society,” the organization stated in its announcement.</p> <p>Schuh is the department head and the Danae and Vasilis Salapatas Professor of Metallurgy in the Department of Materials Science and Engineering. His&nbsp;research is focused on structural metallurgy and seeks to control disorder in metallic microstructures for the purpose of optimizing mechanical properties; much of his work is on the design and control of grain boundary structure and chemistry.</p> <p>Schuh has published dozens of patents and co-founded a number of metallurgical companies. His first MIT spinout company, Xtalic Corporation, commercialized a process from his MIT laboratory to produce stable nanocrystalline coatings, which have now been deployed in over 10 billion individual components in use worldwide. 
Schuh’s startup Desktop Metal is a metal additive manufacturing company developing 3D metal printers that are sufficiently simpler and lower-cost than current options to enable broad use across many industries. Recently, Schuh co-founded Veloxint Corporation, which is commercializing machine components made from custom stable nanocrystalline alloys designed in his MIT laboratory.</p> <p>Tsai, the Picower Professor of Neuroscience and director of the Picower Institute for Learning and Memory, focuses on neurodegenerative conditions such as Alzheimer’s disease. Her work has generated a dozen patents, many of which have been licensed by biomedical companies including two startups, Cognito Therapeutics and Souvien Bio Ltd., that have spun out from her and collaborator’s labs.</p> <p>Her team’s innovations include inhibiting an enzyme that affects the chromatin structure of DNA to rescue gene expression and restore learning and memory, and using light and sound stimulation to enhance the power and synchrony of 40-hertz gamma rhythms in the brain to reduce Alzheimer’s pathology, prevent neuron death, and preserve learning and memory. 
Each of these promising sets of findings in mice is now being tested in human trials.</p> <p>Tsai and Schuh join 21 colleagues from MIT who have previously been elected NAI fellows.</p> Li-Huei Tsai, left, is the Picower Professor of Neuroscience and director of The Picower Institute for Learning and Memory, and Christopher Schuh is department head and the Danae and Vasilis Salapatas Professor of Metallurgy in the Department of Materials Science and Engineering.Photos courtesy of the Picower Institute and the Department of Materials Science and EngineeringDMSE, Picower Institute, School of Engineering, School of Science, Awards, honors and fellowships, Faculty, Brain and cognitive sciences, Innovation and Entrepreneurship (I&E), Alzheimer's, Neuroscience, Materials Science and Engineering Helping machines perceive some laws of physics Model registers “surprise” when objects in a scene do something unexpected, which could be used to build smarter AI. Mon, 02 Dec 2019 00:00:00 -0500 Rob Matheson | MIT News Office <p>Humans have an early understanding of the laws of physical reality. Infants, for instance, hold expectations for how objects should move and interact with each other, and will show surprise when they do something unexpected, such as disappearing in a sleight-of-hand magic trick.</p> <p>Now MIT researchers have designed a model that demonstrates an understanding of some basic “intuitive physics” about how objects should behave. The model could be used to help build smarter artificial intelligence and, in turn, provide information to help scientists understand infant cognition.</p> <p>The model, called ADEPT, observes objects moving around a scene and makes predictions about how the objects should behave, based on their underlying physics. While tracking the objects, the model outputs a signal at each video frame that correlates to a level of “surprise” — the bigger the signal, the greater the surprise. 
If an object ever dramatically mismatches the model’s predictions — by, say, vanishing or teleporting across a scene — its surprise levels will spike.</p> <p>In response to videos showing objects moving in physically plausible and implausible ways, the model registered levels of surprise that matched levels reported by humans who had watched the same videos. &nbsp;</p> <p>“By the time infants are 3 months old, they have some notion that objects don’t wink in and out of existence, and can’t move through each other or teleport,” says first author Kevin A. Smith, a research scientist in the Department of Brain and Cognitive Sciences (BCS) and a member of the Center for Brains, Minds, and Machines (CBMM). “We wanted to capture and formalize that knowledge to build infant cognition into artificial-intelligence agents. We’re now getting near human-like in the way models can pick apart basic implausible or plausible scenes.”</p> <p>Joining Smith on the paper are co-first authors Lingjie Mei, an undergraduate in the Department of Electrical Engineering and Computer Science, and BCS research scientist Shunyu Yao; Jiajun Wu PhD ’19; CBMM investigator Elizabeth Spelke; Joshua B. Tenenbaum, a professor of computational cognitive science, and researcher in CBMM, BCS, and the Computer Science and Artificial Intelligence Laboratory (CSAIL); and CBMM investigator Tomer D. Ullman PhD ’15.</p> <p><strong>Mismatched realities</strong></p> <p>ADEPT relies on two modules: an “inverse graphics” module that captures object representations from raw images, and a “physics engine” that predicts the objects’ future representations from a distribution of possibilities.</p> <p>Inverse graphics basically extracts information of objects —&nbsp;such as shape, pose, and velocity — from pixel inputs. This module captures frames of video as images and uses inverse graphics to extract this information from objects in the scene. But it doesn’t get bogged down in the details. 
ADEPT requires only some approximate geometry of each shape to function. In part, this helps the model generalize predictions to new objects, not just those it’s trained on.</p> <p>“It doesn’t matter if an object is rectangle or circle, or if it’s a truck or a duck. ADEPT just sees there’s an object with some position, moving in a certain way, to make predictions,” Smith says. “Similarly, young infants also don’t seem to care much about some properties like shape when making physical predictions.”</p> <p>These coarse object descriptions are fed into a physics engine — software that simulates behavior of physical systems, such as rigid or fluidic bodies, and is commonly used for films, video games, and computer graphics. The researchers’ physics engine “pushes the objects forward in time,” Ullman says. This creates a range of predictions, or a “belief distribution,” for what will happen to those objects in the next frame.</p> <p>Next, the model observes the actual next frame. Once again, it captures the object representations, which it then aligns to one of the predicted object representations from its belief distribution. If the object obeyed the laws of physics, there won’t be much mismatch between the two representations. On the other hand, if the object did something implausible — say, it vanished from behind a wall — there will be a major mismatch.</p> <p>ADEPT then resamples from its belief distribution and notes a very low probability that the object had simply vanished. If there’s a low enough probability, the model registers great “surprise” as a signal spike. Basically, surprise is inversely proportional to the probability of an event occurring. If the probability is very low, the signal spike is very high. &nbsp;</p> <p>“If an object goes behind a wall, your physics engine maintains a belief that the object is still behind the wall. If the wall goes down, and nothing is there, there’s a mismatch,” Ullman says. 
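The inverse relationship between surprise and probability described above is typically formalized as Shannon surprisal, the negative log of the probability. A minimal sketch (the frame probabilities here are made up for illustration, not ADEPT's actual outputs):

```python
import numpy as np

def surprise(prob, eps=1e-12):
    """Shannon surprisal, -log(p): low-probability observations give large spikes."""
    return -np.log(np.clip(prob, eps, 1.0))

# Probability the physics engine assigns to each observed frame
frame_probs = [0.9, 0.85, 0.8, 1e-6, 0.9]   # 4th frame: object "vanishes"
signal = [surprise(p) for p in frame_probs]
print(max(signal) == signal[3])   # the spike lands on the implausible frame
```

Using the log rather than, say, 1/p keeps the signal well-behaved: plausible frames contribute near-zero surprise, while an event the model considers nearly impossible produces a sharp but finite spike.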
“Then, the model says, ‘There’s an object in my prediction, but I see nothing. The only explanation is that it disappeared, so that’s surprising.’”</p> <p><strong>Violation of expectations</strong></p> <p>In developmental psychology, researchers run “violation of expectations” tests in which infants are shown pairs of videos. One video shows a plausible event, with objects adhering to their expected notions of how the world works. The other video is the same in every way, except objects behave in a way that violates expectations in some way. Researchers will often use these tests to measure how long the infant looks at a scene after an implausible action has occurred. The longer they stare, researchers hypothesize, the more they may be surprised or interested in what just happened.</p> <p>For their experiments, the researchers created several scenarios based on classical developmental research to examine the model’s core object knowledge. They employed 60 adults to watch 64 videos of known physically plausible and physically implausible scenarios. Objects, for instance, will move behind a wall and, when the wall drops, they’ll still be there or they’ll be gone. The participants rated their surprise at various moments on an increasing scale of 0 to 100. Then, the researchers showed the same videos to the model. Specifically, the scenarios examined the model’s ability to capture notions of permanence (objects do not appear or disappear for no reason), continuity (objects move along connected trajectories), and solidity (objects cannot move through one another).</p> <p>ADEPT matched humans particularly well on videos where objects moved behind walls and disappeared when the wall was removed. Interestingly, the model also matched surprise levels on videos that humans weren’t surprised by but maybe should have been. 
For example, in a video where an object moving at a certain speed disappears behind a wall and immediately comes out the other side, the object might have sped up dramatically when it went behind the wall or it might have teleported to the other side. In general, humans and ADEPT were both less certain about whether that event was or wasn’t surprising. The researchers also found traditional neural networks that learn physics from observations — but don’t explicitly represent objects — are far less accurate at differentiating surprising from unsurprising scenes, and their picks for surprising scenes don’t often align with humans.</p> <p>Next, the researchers plan to delve further into how infants observe and learn about the world, with aims of incorporating any new findings into their model. Studies, for example, show that infants up until a certain age actually aren’t very surprised when objects completely change in some ways — such as if a truck disappears behind a wall, but reemerges as a duck.</p> <p>“We want to see what else needs to be built in to understand the world more like infants, and formalize what we know about psychology to build better AI agents,” Smith says.</p> An MIT-invented model demonstrates an understanding of some basic “intuitive physics” by registering “surprise” when objects in simulations move in unexpected ways, such as rolling behind a wall and not reappearing on the other side.Image: Christine Daniloff, MITResearch, Computer science and technology, Algorithms, Artificial intelligence, Machine learning, Computer vision, Computer Science and Artificial Intelligence Laboratory (CSAIL), Brain and cognitive sciences, Electrical Engineering & Computer Science (eecs), School of Engineering, Center for Brains Minds and Machines Bot can beat humans in multiplayer hidden-role games Using deductive reasoning, the bot identifies friend or foe to ensure victory over humans in certain online games. 
Tue, 19 Nov 2019 23:59:59 -0500 Rob Matheson | MIT News Office <p>MIT researchers have developed a bot equipped with artificial intelligence that can beat human players in tricky online multiplayer games where player roles and motives are kept secret.</p> <p>Many gaming bots have been built to keep up with human players. Earlier this year, a team from Carnegie Mellon University developed the world’s first bot that can beat professionals in multiplayer poker. DeepMind’s AlphaGo made headlines in 2016 for besting a professional Go player. Several bots have also been built to beat professional chess players or join forces in cooperative games such as online capture the flag. In these games, however, the bot knows its opponents and teammates from the start.</p> <p>At the Conference on Neural Information Processing Systems next month, the researchers will present DeepRole, the first gaming bot that can win online multiplayer games in which the participants’ team allegiances are initially unclear. The bot is designed with novel “deductive reasoning” added into an AI algorithm commonly used for playing poker. This helps it reason about partially observable actions, to determine the probability that a given player is a teammate or opponent. In doing so, it quickly learns whom to ally with and which actions to take to ensure its team’s victory.</p> <p>The researchers pitted DeepRole against human players in more than 4,000 rounds of the online game “The Resistance: Avalon.” In this game, players try to deduce their peers’ secret roles as the game progresses, while simultaneously hiding their own roles. As both a teammate and an opponent, DeepRole consistently outperformed human players.</p> <p>“If you replace a human teammate with a bot, you can expect a higher win rate for your team. 
Bots are better partners,” says first author Jack Serrino ’18, who majored in electrical engineering and computer science at MIT and is an avid online “Avalon” player.</p> <p>The work is part of a broader project to better model how humans make socially informed decisions. Doing so could help build robots that better understand, learn from, and work with humans.</p> <p>“Humans learn from and cooperate with others, and that enables us to achieve together things that none of us can achieve alone,” says co-author Max Kleiman-Weiner, a postdoc in the Center for Brains, Minds and Machines and the Department of Brain and Cognitive Sciences at MIT, and at Harvard University. “Games like ‘Avalon’ better mimic the dynamic social settings humans experience in everyday life. You have to figure out who’s on your team and will work with you, whether it’s your first day of kindergarten or another day in your office.”</p> <p>Joining Serrino and Kleiman-Weiner on the paper are David C. Parkes of Harvard and Joshua B. Tenenbaum, a professor of computational cognitive science and a member of MIT’s Computer Science and Artificial Intelligence Laboratory and the Center for Brains, Minds and Machines.</p> <p><strong>Deductive bot</strong></p> <p>In “Avalon,” three players are randomly and secretly assigned to a “resistance” team and two players to a “spy” team. Both spy players know all players’ roles. During each round, one player proposes a subset of two or three players to execute a mission. All players simultaneously and publicly vote to approve or disapprove the subset. If a majority approve, the subset secretly determines whether the mission will succeed or fail. If two “succeeds” are chosen, the mission succeeds; if one “fail” is selected, the mission fails. Resistance players must always choose to succeed, but spy players may choose either outcome. 
The resistance team wins after three successful missions; the spy team wins after three failed missions.</p> <p>Winning the game basically comes down to deducing who is resistance or spy, and voting for your collaborators. But that’s actually more computationally complex than playing chess and poker. “It’s a game of imperfect information,” Kleiman-Weiner says. “You’re not even sure who you’re against when you start, so there’s an additional discovery phase of finding whom to cooperate with.”</p> <p>DeepRole uses a game-planning algorithm called “counterfactual regret minimization” (CFR) — which learns to play a game by repeatedly playing against itself — augmented with deductive reasoning. At each point in a game, CFR looks ahead to create a decision “game tree” of lines and nodes describing the potential future actions of each player. Game trees represent all possible actions (lines) each player can take at each future decision point. In playing out potentially billions of game simulations, CFR notes which actions had increased or decreased its chances of winning, and iteratively revises its strategy to include more good decisions. Eventually, it plans an optimal strategy that, at worst, ties against any opponent.</p> <p>CFR works well for games like poker, with public actions — such as betting money and folding a hand — but it struggles when actions are secret. The researchers’ CFR combines public actions and consequences of private actions to determine if players are resistance or spy.</p> <p>The bot is trained by playing against itself as both resistance and spy. When playing an online game, it uses its game tree to estimate what each player is going to do. The game tree represents a strategy that gives each player the highest likelihood to win as an assigned role. 
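</p>

<p>The strategy-revision step at the heart of CFR is commonly implemented as “regret matching”: at each decision point, the bot plays each action with probability proportional to the accumulated positive regret for not having played it. The sketch below is a generic regret-matching update under that standard formulation, not DeepRole’s actual code:</p>

```python
def regret_matching(cumulative_regrets):
    """Turn accumulated regrets into a strategy (one probability per action).

    Actions that would have improved past outcomes carry positive regret
    and are played proportionally more often; if no action has positive
    regret, fall back to uniform random play.
    """
    positive = [max(r, 0.0) for r in cumulative_regrets]
    total = sum(positive)
    n = len(cumulative_regrets)
    if total == 0.0:
        return [1.0 / n] * n
    return [p / total for p in positive]


# Three candidate actions at one decision point:
print(regret_matching([10.0, -4.0, 30.0]))  # [0.25, 0.0, 0.75]
print(regret_matching([-1.0, -2.0, -3.0]))  # uniform fallback
```

<p>Iterating this update over many simulated games is what pushes the bot’s overall strategy toward the worst-case guarantee described above.</p>

<p>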
The tree’s nodes contain “counterfactual values,” which are estimates of the payoff each player receives by playing a given strategy.</p> <p>At each mission, the bot looks at how each person played in comparison to the game tree. If, throughout the game, a player makes enough decisions that are inconsistent with the bot’s expectations, then the player is probably playing as the other role. Eventually, the bot assigns a high probability to each player’s role. These probabilities are used to update the bot’s strategy to increase its chances of victory.</p> <p>Simultaneously, it uses this same technique to estimate how a third-person observer might interpret its own actions. This helps it estimate how other players may react, helping it make more intelligent decisions. “If it’s on a two-player mission that fails, the other players know one player is a spy. The bot probably won’t propose the same team on future missions, since it knows the other players think it’s bad,” Serrino says.</p> <p><strong>Language: The next frontier</strong></p> <p>Interestingly, the bot did not need to communicate with other players, which is usually a key component of the game. “Avalon” enables players to chat on a text module during the game. “But it turns out our bot was able to work well with a team of other humans while only observing player actions,” Kleiman-Weiner says. “This is interesting, because one might think games like this require complicated communication strategies.”</p> <p>“I was thrilled to see this paper when it came out,” says Michael Bowling, a professor at the University of Alberta whose research focuses, in part, on training computers to play games. “It is really exciting seeing the ideas in DeepStack see broader application outside of poker. [DeepStack has] been so central to bringing the successes of AI in chess and Go to situations of imperfect information. But I still wasn't expecting to see it extended so quickly into the situation of a hidden role game like Avalon. 
Being able to navigate a social deduction scenario, which feels so quintessentially human, is a really important step. There is still much work to be done, especially when the social interaction is more open ended, but we keep seeing that many of the fundamental AI algorithms with self-play learning can go a long way.”</p> <p>Next, the researchers may enable the bot to communicate during games with simple text, such as saying a player is good or bad. That would involve tying text to the estimated probability that a player is resistance or spy, which the bot already uses to make its decisions. Beyond that, a future bot might be equipped with more complex communication capabilities, enabling it to play language-heavy social-deduction games — such as the popular game “Werewolf” — which involve several minutes of arguing and persuading other players about who’s on the good and bad teams.</p> <p>“Language is definitely the next frontier,” Serrino says. “But there are many challenges to attack in those games, where communication is so key.”</p> DeepRole, an MIT-invented gaming bot equipped with “deductive reasoning,” can beat human players in tricky online multiplayer games where player roles and motives are kept secret.Research, Computer science and technology, Algorithms, Video games, Artificial intelligence, Machine learning, Language, Computer Science and Artificial Intelligence Laboratory (CSAIL), Brain and cognitive sciences, Electrical Engineering & Computer Science (eecs), School of Engineering Students push to speed up artificial intelligence adoption in Latin America To help the region catch up, students organize summit to bring Latin American policymakers and researchers to MIT. Tue, 19 Nov 2019 16:30:01 -0500 Kim Martineau | MIT Quest for Intelligence <p>Omar Costilla Reyes reels off all the ways that artificial intelligence might benefit his native Mexico. 
It could raise living standards, he says, lower health care costs, improve literacy and promote greater transparency and accountability in government.</p> <p>But Mexico, like many of its Latin American neighbors, has failed to invest as heavily in AI as other developing countries. That worries <a href="">Costilla Reyes</a>, a postdoc at MIT’s Department of Brain and Cognitive Sciences.</p> <p>To give the region a nudge, Costilla Reyes and three other MIT graduate students — <a href="" target="_blank">Guillermo Bernal</a>, <a href="">Emilia Simison</a> and <a href="">Pedro Colon-Hernandez</a> — have spent the last six months putting together a three-day event that will bring together policymakers and AI researchers in Latin America with AI researchers in the United States. The <a href="">AI Latin American sumMIT</a> will take place in January at the <a href="">MIT Media Lab</a>.</p> <p>“Africa is getting lots of support — Africa will eventually catch up,” Costilla Reyes says. “You don’t see anything like that in Latin America, despite the potential for AI to move the region forward socially and economically.”</p> <p><strong>Four paths to MIT and research inspired by AI</strong></p> <p>Each of the four students took a different route to MIT, where AI plays a central role in their work — on the brain, voice assistants, augmented creativity and politics. Costilla Reyes got his first computer in high school, and though it had only dial-up internet access, it exposed him to a world far beyond his home city of Toluca. He studied for a PhD at the University of Manchester, where he developed an <a href="">AI system</a> with applications in security and health to identify individuals by their gait. 
At MIT, Costilla Reyes is building computational models of how firing neurons in the brain produce memory and cognition, information he hopes can also advance AI.</p> <p>After graduating from a vocational high school in El Salvador, Bernal moved in with relatives in New Jersey and studied English at a nearby community college. He continued on to Pratt Institute, where he learned to incorporate Python into his design work. Now at the MIT Media Lab, he’s developing interactive storytelling tools like <a href="">PaperDreams</a> that use AI to help people unlock their creativity. His work recently won a <a href="">Schnitzer Prize</a>.</p> <p>Simison came to MIT to study for a PhD in political science after her professors at Argentina’s University Torcuato Di Tella encouraged her to continue her studies in the United States. She is currently using text analysis tools to mine archival records in Brazil and Argentina to understand the role that political parties and unions played under the last dictatorships in both countries.</p> <p>Colon-Hernandez grew up in Puerto Rico fascinated with video games. A robotics class in high school inspired him to build a computer to play video games of his own, which led to a degree in computer engineering at the University of Puerto Rico at Mayagüez. After helping a friend with a project at MIT Lincoln Laboratory, Colon-Hernandez applied to a summer research program at MIT, and later, the MIT Media Lab’s graduate program. He’s currently working on intelligent voice assistants.</p> <p>It’s hard to generalize about a region as culturally diverse and geographically vast as Latin America, stretching from Mexico and the Caribbean to the tip of South America. But protests, violence and reports of entrenched corruption have dominated the news for years, and the average income per person has been <a href="">falling</a> with respect to the United States since the 1950s. 
All four students see AI as a means to bring stability and greater opportunity to their home countries.</p> <p><strong>AI with a humanitarian agenda</strong></p> <p>The idea to bring Latin American policymakers to MIT was hatched last December, at the world’s premier conference for AI research, <a href="">NeurIPS</a>. The organizers of NeurIPS had launched several new workshops to promote diversity in response to growing criticism of the exclusion of women and minorities in tech. At <a href="">Latinx</a>, a workshop for Latin American students, Costilla Reyes met Colon-Hernandez, who was giving a talk on voice-activated wearables. A few hours later they began drafting a plan to bring a Latinx-style event to MIT.</p> <p>Back in Cambridge, they found support from <a href="">Armando Solar-Lezama</a>, a <a href="">native of Mexico</a> and a professor in MIT’s <a href="">Department of Electrical Engineering and Computer Science</a>. They also began knocking on doors for funding, securing an initial $25,000 grant from MIT’s <a href="">Institute Community and Equity Office</a>. Other graduate students joined the cause, and together they set out to recruit speakers, reserve space at the MIT Media Lab and design a website. RIMAC, the MIT-IBM Watson AI Lab, X Development, and Facebook have all since offered support for the event.</p> <p>Unlike other AI conferences, this one has a practical bent, with themes that echo many of the UN Sustainable Development Goals: to end extreme poverty, develop quality education, create fair and transparent institutions, address climate change and provide good health.</p> <p>The students have set similarly concrete goals for the conference, from mapping the current state of AI adoption across Latin America to outlining steps policymakers can take to coordinate efforts. U.S. 
researchers will offer tutorials on open-source AI platforms like TensorFlow and scikit-learn for Python, and the students are continuing to raise money to fly 10 of their counterparts from Latin America to attend the poster session.</p> <p>“We reinvent the wheel so much of the time,” says Simison. “If we can motivate countries to integrate their efforts, progress could move much faster.”</p> <p>The potential rewards are high. A <a href="">2017 report</a> by Accenture estimated that if AI were integrated into South America’s top five economies — Argentina, Brazil, Chile, Colombia and Peru — which generate about 85 percent of the continent’s economic output, they could each add up to 1 percent to their annual growth rate.</p> <p>In developed countries like the U.S. and in Europe, AI is sometimes viewed apprehensively for its potential to eliminate jobs, spread misinformation and perpetuate bias and inequality. But the risk of not embracing AI, especially in countries that are already lagging behind economically, is potentially far greater, says Solar-Lezama. “There’s an urgency to make sure these countries have a seat at the table and can benefit from what will be one of the big engines for economic development in the future,” he says.</p> <p>Post-conference deliverables include a set of recommendations for policymakers to move forward. “People are protesting across the entire continent due to the marginal living conditions that most face,” says Costilla Reyes. “We believe that AI plays a key role now, and in the future development of the region, if it’s used in the right way.”</p> “We believe that AI plays a key role now, and in the future development of the region, if it’s used in the right way,” says Omar Costilla Reyes, one of four MIT graduate students working to help Latin America adopt artificial intelligence technologies. 
Pictured here (left to right) are Costilla Reyes, Emilia Simison, Pedro Antonio Colon-Hernandez, and Guillermo Bernal.Photo: Kim MartineauQuest for Intelligence, Electrical engineering and computer science (EECS), Media Lab, Brain and cognitive sciences, Lincoln Laboratory, MIT-IBM Watson AI Lab, School of Engineering, School of Science, School of Humanities Arts and Social Sciences, Artificial intelligence, Computer science and technology, Technology and society, Machine learning, Software, Algorithms, Political science, Latin America School of Science appoints 14 faculty members to named professorships Those selected for these positions receive additional support to pursue their research and develop their careers. Mon, 04 Nov 2019 11:50:01 -0500 School of Science <p>The <a href="">School of Science</a> has announced that 14 of its faculty members have been appointed to named professorships. The faculty members selected for these positions receive additional support to pursue their research and develop their careers.</p> <p><a href="">Riccardo Comin</a> is an assistant professor in the Department of Physics. He has been named a Class of 1947 Career Development Professor. This three-year professorship is granted in recognition of the recipient's outstanding work in both research and teaching. Comin is interested in condensed matter physics. He uses experimental methods to synthesize new materials, as well as analysis through spectroscopy and scattering to investigate solid state physics. Specifically, the Comin lab attempts to discover and characterize electronic phases of quantum materials. Recently, his lab, in collaboration with colleagues, discovered that weaving a conductive material into a particular pattern known as the “kagome” pattern can result in quantum behavior when electricity is passed through.</p> <p><a href="">Joseph Davis</a>, assistant professor in the Department of Biology, has been named a Whitehead Career Development Professor. 
He looks at how cells build and deconstruct complex molecular machinery. The work of his lab group relies on biochemistry, biophysics, and structural approaches that include spectrometry and microscopy. A current project investigates the formation of the ribosome, an essential component in all cells. His work has implications for metabolic engineering, drug delivery, and materials science.</p> <p><a href="">Lawrence Guth</a> is now the Claude E. Shannon (1940) Professor of Mathematics. Guth explores harmonic analysis and combinatorics, and he is also interested in metric geometry and identifying connections between geometric inequalities and topology. The subject of metric geometry revolves around being able to estimate measurements, including length, area, volume and distance, and combinatorial geometry is essentially the estimation of the intersection of patterns in simple shapes, including lines and circles.</p> <p><a href="">Michael Halassa</a>, an assistant professor in the Department of Brain and Cognitive Sciences, will hold the three-year Class of 1958 Career Development Professorship. His area of interest is brain circuitry. By investigating the networks and connections in the brain, he hopes to understand how they operate — and identify any ways in which they might deviate from normal operations, causing neurological and psychiatric disorders. Several publications from his lab discuss improvements in the treatment of the deleterious symptoms of autism spectrum disorder and schizophrenia, and his latest findings provide insights on how the brain filters out distractions, particularly noise. 
Halassa is an associate investigator at the McGovern Institute for Brain Research and an affiliate member of the Picower Institute for Learning and Memory.</p> <p><a href="">Sebastian Lourido</a>, an assistant professor and the new Latham Family Career Development Professor in the Department of Biology for the next three years, works on treatments for infectious disease by learning about parasitic vulnerabilities. Focusing on human pathogens, Lourido and his lab investigate, at the molecular level, what allows parasites to be so widespread and deadly. This includes exploring how calcium regulates eukaryotic cells, which, in turn, affects processes such as muscle contraction and membrane repair, in addition to kinase responses.</p> <p><a href="">Brent Minchew</a> is named a Cecil and Ida Green Career Development Professor for a three-year term. Minchew, a faculty member in the Department of Earth, Atmospheric and Planetary Sciences, studies glaciers using modeling and remote sensing methods, such as interferometric synthetic aperture radar. His research into glaciers, including their mechanics, rheology, and interactions with their surrounding environment, extends to observing their responses to climate change. His group recently determined that Antarctica, in a worst-case scenario climate projection, would not contribute as much as predicted to rising sea level.</p> <p><a href="">Elly Nedivi</a>, a professor in the departments of Brain and Cognitive Sciences and Biology, has been named the <a href="">inaugural</a> William R. (1964) and Linda R. Young Professor. She works on brain plasticity, defined as the brain’s ability to adapt with experience, by identifying genes that play a role in plasticity and their neuronal and synaptic functions. In one of her lab’s recent publications, they suggest that variants of a particular gene may undermine expression or production of a protein, increasing the risk of bipolar disorder. 
In addition, she collaborates with others at MIT to develop new microscopy tools that allow better analysis of brain connectivity. Nedivi is also a member of the Picower Institute for Learning and Memory.</p> <p><a href="">Andrei Neguț</a> has been named a Class of 1947 Career Development Professor for a three-year term. Neguț, a member of the Department of Mathematics, focuses on problems in geometric representation theory. This topic requires investigation within algebraic geometry and representation theory simultaneously, with implications for mathematical physics, symplectic geometry, combinatorics and probability theory.</p> <p><a href="">Matĕj Peč</a>, the Victor P. Starr Career Development Professor in the Department of Earth, Atmospheric and Planetary Sciences until 2021, studies how the movement of the Earth’s tectonic plates affects rocks, mechanically and microstructurally. To investigate such a large-scale topic, he uses high-pressure, high-temperature experiments in a lab to simulate the driving forces associated with plate motion, and compares results with natural observations and theoretical modeling. His lab has identified a particular boundary beneath the Earth’s crust where rock properties shift from brittle, like peanut brittle, to viscous, like honey, and determined how that layer accommodates building strain between the two. In his investigations, he also considers the effect on melt generation miles underground.</p> <p><a href="">Kerstin Perez</a> has been named the three-year Class of 1948 Career Development Professor in the Department of Physics. Her research interest is dark matter. She uses novel analytical tools, such as those affixed on a balloon-borne instrument that can carry out processes similar to those of a particle collider (like the Large Hadron Collider), to detect new particle interactions in space with the help of cosmic rays. 
In another research project, Perez uses a satellite telescope array on Earth to search for X-ray signatures of mysterious particles. Her work requires heavy involvement with collaborative observatories, instruments, and telescopes. Perez is affiliated with the Kavli Institute for Astrophysics and Space Research.</p> <p><a href="">Bjorn Poonen</a>, named a Distinguished Professor of Science in the Department of Mathematics, studies number theory and algebraic geometry. He and his colleagues generate algorithms that can solve polynomial equations with the particular requirement that the solutions be rational numbers. These types of problems can be useful in encoding data. He also helps to determine what is undecidable, that is, he explores the limits of computing.</p> <p><a href="">Daniel Suess</a>, named a Class of 1948 Career Development Professor in the Department of Chemistry, uses molecular chemistry to explain global biogeochemical cycles. In the fields of inorganic and biological chemistry, Suess and his lab look into understanding complex and challenging reactions and clustering of particular chemical elements and their catalysts. Most notably, these reactions include those that are essential to solar fuels. Suess’s efforts to investigate both biological and synthetic systems have broad aims of both improving human health and decreasing environmental impacts.</p> <p><a href="">Alison Wendlandt</a> is the new holder of the five-year Cecil and Ida Green Career Development Professorship. In the Department of Chemistry, the Wendlandt research group focuses on physical organic chemistry and organic and organometallic synthesis to develop reaction catalysts. Her team concentrates on designing new catalysts, identifying processes to which these catalysts can be applied, and determining principles that can expand preexisting reactions. 
Her team’s efforts delve into the fields of synthetic organic chemistry, reaction kinetics, and mechanics.</p> <p><a href="">Julien de Wit</a>, a Department of Earth, Atmospheric and Planetary Sciences assistant professor, has been named a Class of 1954 Career Development Professor. He combines math and science to answer big-picture planetary questions. Using data science, de Wit develops new analytical techniques for mapping exoplanetary atmospheres, studies planet-star interactions of planetary systems, and determines atmospheric and planetary properties of exoplanets from spectroscopic information. He is a member of the scientific team involved in the Search for habitable Planets EClipsing ULtra-cOOl Stars (SPECULOOS) and the TRANsiting Planets and Planetesimals Small Telescope (TRAPPIST), made up of an international collection of observatories. He is affiliated with the Kavli Institute.</p> Clockwise from top left: Riccardo Comin, Joseph Davis, Lawrence Guth, Michael Halassa, Sebastian Lourido, Brent Minchew, Elly Nedivi, Andrei Neguț, Matĕj Peč, Kerstin Perez, Bjorn Poonen, Daniel Suess, Alison Wendlandt, and Julien de Wit Photos courtesy of the faculty.School of Science, Physics, Biology, Mathematics, Brain and cognitive sciences, McGovern Institute, Picower Institute, EAPS, Kavli Institute, Chemistry, Faculty, Awards, honors and fellowships MIT announces framework to guide negotiations with publishers Principle-based framework aims to support the needs of scholars, reflect MIT principles, and advance science. Wed, 23 Oct 2019 10:55:01 -0400 Brigham Fay | MIT Libraries <p>The MIT Libraries, together with the MIT Committee on the Library System and the Ad Hoc Task Force on Open Access to MIT’s Research, announced that it has developed a <a href="" target="_blank">principle-based framework</a> to guide negotiations with scholarly publishers. 
The framework emerges directly from the core principles for open science and open scholarship articulated in the recommendations of the <a href="" target="_blank">Task Force on Open Access to MIT’s Research</a>, which released its <a href="" target="_self">final report</a> to the MIT community on Oct. 17.</p> <p>The framework affirms the overarching principle that control of scholarship and its dissemination should reside with scholars and their institutions. It aims to ensure that scholarly research outputs are openly and equitably available to the broadest possible audience, while also providing valued services to the MIT community.</p> <p>“The value of scholarly content primarily comes from researchers, authors, and peer reviewers — the people who are creating knowledge and reviewing and improving it,” says Roger Levy, associate professor of brain and cognitive sciences and chair of the Committee on the Library System. “We think authors should have considerable rights to their own intellectual outputs.”</p> <p>In MIT’s model, institutions and scholars maintain the rights to share their work openly via institutional repositories, and publishers are paid for the services valued by authors and readers, such as curation and peer-review management.</p> <p>“The MIT Framework gives us a starting point for imagining journals as a service,” says Chris Bourg, director of the MIT Libraries.</p> <p>The framework was developed by members of the Open Access Task Force, the Committee on the Library System, and MIT Libraries staff, and vetted by faculty groups across the Institute.</p> <p>“The ideas in the framework are not new for MIT, which has been a leader in sharing its knowledge with the world,” says Bourg. “This is a clear articulation by the MIT faculty of what they want in scholarly communications — a scholar-led, open, and equitable environment that promises to advance knowledge and its applications. 
It is also a model that we think will be appealing for a diverse range of scholarly institutions, from private research-intensive universities like MIT to small liberal arts colleges and large public universities.”</p> <p>“The six core principles of the MIT Framework free researchers and research institutions to follow their own lights in sharing their research, and help ensure that scholarly communities will retain control of scholarly communication,” says Peter Suber, director of the Harvard University Library Office for Scholarly Communication.</p> <p>While MIT intends to rely on this framework as a guide for relationships with publishers regardless of the actions of any peer institutions or other organizations, institutions ranging from large research universities to liberal arts colleges have decided to endorse the framework in recognition of its potential to advance open scholarship and the public good.</p> <p>“The MIT Framework values the labor and rights of authors, while respecting a role for journals and publishers,” says Janelle Wertzberger, assistant dean and director of scholarly communications at Gettysburg College. “It balances author rights with user benefits by ensuring that published research will reach the widest possible audience. This approach aims to realign the current publishing system with the needs of all stakeholders within the system, and thereby creates positive change for all.”</p> <p>A full list of endorsers is available at <a href="" target="_blank"></a>. Additional institutions are also invited to add their endorsement on this page.</p> <p>MIT originally passed its <a href="" target="_blank">Faculty Open Access Policy</a> in 2009; it was one of the first in the country and the first to be adopted university-wide. 
Today close to 50 percent of MIT faculty-authored journal articles are freely available in <a href="" target="_blank">DSpace@MIT</a>, the Institute’s repository.</p> The MIT Libraries, together with the MIT Committee on the Library System and the Ad Hoc Task Force on Open Access to MIT’s Research, has developed a principle-based framework to guide negotiations with scholarly publishers. Photo: Jake BelcherLibraries, Open access, Research, Brain and cognitive sciences, School of Science Drug combination reverses hypersensitivity to noise Findings in mice suggest targeting certain brain circuits could offer new ways to treat some neurological disorders. Mon, 21 Oct 2019 10:59:59 -0400 Anne Trafton | MIT News Office <p>People with autism often experience hypersensitivity to noise and other sensory input. MIT neuroscientists have now identified two brain circuits that help tune out distracting sensory information, and they have found a way to reverse noise hypersensitivity in mice by boosting the activity of those circuits.</p> <p>One of the circuits the researchers identified is involved in filtering noise, while the other exerts top-down control by allowing the brain to switch its attention between different sensory inputs.</p> <p>The researchers showed that restoring the function of both circuits worked much better than treating either circuit alone. This demonstrates the benefits of mapping and targeting multiple circuits involved in neurological disorders, says Michael Halassa, an assistant professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research.</p> <p>“We think this work has the potential to transform how we think about neurological and psychiatric disorders, [so that we see them] as a combination of circuit deficits,” says Halassa, the senior author of the study. 
“The way we should approach these brain disorders is to map, to the best of our ability, what combination of deficits are there, and then go after that combination.”</p> <p>MIT postdoc Miho Nakajima and research scientist L. Ian Schmitt are the lead authors of the paper, which appears in <em>Neuron</em> on Oct. 21. Guoping Feng, the James W. and Patricia Poitras Professor of Neuroscience and a member of the McGovern Institute, is also an author of the paper.</p> <p><strong>Hypersensitivity</strong></p> <p>Many gene variants have been linked with autism, but most patients have very few, if any, of those variants. One of the affected genes is ptchd1, which is mutated in about 1 percent of people with autism. In a <a href="">2016 study</a>, Halassa and Feng found that during development this gene is primarily expressed in a part of the thalamus called the thalamic reticular nucleus (TRN).</p> <p>That study revealed that neurons of the TRN help the brain to adjust to changes in sensory input, such as noise level or brightness. In mice with ptchd1 missing, TRN neurons fire too fast, and they can’t adjust when noise levels change. This prevents the TRN from performing its usual sensory filtering function, Halassa says.</p> <p>“Neurons that are there to filter out noise, or adjust the overall level of activity, are not adapting. Without the ability to fine-tune the overall level of activity, you can get overwhelmed very easily,” he says.</p> <p>In the 2016 study, the researchers also found that they could restore some of the mice’s noise filtering ability by treating them with a drug called EBIO that activates neurons’ potassium channels. 
EBIO has harmful cardiac side effects, so it likely could not be used in human patients, but other drugs that boost TRN activity may have a similar beneficial effect on hypersensitivity, Halassa says.</p> <p>In the new <em>Neuron</em> paper, the researchers delved more deeply into the effects of ptchd1, which is also expressed in the prefrontal cortex. To explore whether the prefrontal cortex might play a role in the animals’ hypersensitivity, the researchers used a task in which mice have to distinguish between three different tones, presented with varying amounts of background noise.</p> <p>Normal mice can learn to use a cue that alerts them whenever the noise level is going to be higher, improving their overall performance on the task. A similar phenomenon is seen in humans, who can adjust better to noisier environments when they have some advance warning, Halassa says. However, mice with the ptchd1 mutation were unable to use these cues to improve their performance, even when their TRN deficit was treated with EBIO.</p> <p>This suggested that another brain circuit must be playing a role in the animals’ ability to filter out distracting noise. To test the possibility that this circuit is located in the prefrontal cortex, the researchers recorded from neurons in that region while mice lacking ptchd1 performed the task. They found that neuronal activity died out much faster in these mice than in the prefrontal cortex of normal mice. 
That led the researchers to test another drug, known as modafinil, which is FDA-approved to treat narcolepsy and is sometimes prescribed to improve memory and attention.</p> <p>The researchers found that when they treated mice missing ptchd1 with both modafinil and EBIO, their hypersensitivity disappeared, and their performance on the task was the same as that of normal mice.</p> <p><strong>Targeting circuits</strong></p> <p>This successful reversal of symptoms suggests that the mice missing ptchd1 experience a combination of circuit deficits that each contribute differently to noise hypersensitivity. One circuit filters noise, while the other helps to control noise filtering based on external cues. Ptchd1 mutations affect both circuits, in different ways that can be treated with different drugs.</p> <p>Both of those circuits could also be affected by other genetic mutations that have been linked to autism and other neurological disorders, Halassa says. Targeting those circuits, rather than specific genetic mutations, may offer a more effective way to treat such disorders, he says.</p> <p>“These circuits are important for moving things around the brain — sensory information, cognitive information, working memory,” he says. “We’re trying to reverse-engineer circuit operations in the service of figuring out what to do about a real human disease.”</p> <p>He now plans to study circuit-level disturbances that arise in schizophrenia.
That disorder affects circuits involving cognitive processes such as inference — the ability to draw conclusions from available information.</p> <p>The research was funded by the Simons Center for the Social Brain at MIT, the Stanley Center for Psychiatric Research at the Broad Institute, the McGovern Institute for Brain Research at MIT, the Pew Foundation, the Human Frontier Science Program, the National Institutes of Health, the James and Patricia Poitras Center for Psychiatric Disorders Research at MIT, a Japan Society for the Promotion of Science Fellowship, and a National Alliance for Research on Schizophrenia and Depression Young Investigator Award.</p> MIT neuroscientists have identified two brain circuits that help tune out distracting sensory information.Image: MIT NewsResearch, Autism, Brain and cognitive sciences, McGovern Institute, School of Science, Neuroscience, National Institutes of Health (NIH) Open access task force releases final recommendations Report urges MIT community to openly share the products of its research and teaching.
Thu, 17 Oct 2019 15:00:01 -0400 Brigham Fay | MIT Libraries <p>The&nbsp;<a href="" target="_blank">Ad Hoc Task Force on Open Access to MIT’s Research</a>&nbsp;has released its <a href="" target="_blank">final recommendations</a>, which aim to support and increase the open sharing of MIT publications, data, software, and educational materials.&nbsp;</p> <p>The Institute-wide open access (OA) task force, convened by Provost Martin Schmidt in July 2017, was charged with exploring how MIT should update and revise MIT’s current OA policies to “further the Institute’s mission of disseminating the fruits of its research and scholarship as widely as possible.” A draft set of recommendations was released in March 2019 for public comment, and the valuable input provided by the community was incorporated into the final recommendations.&nbsp;</p> <p>“In 2009, MIT made a bold statement when it passed one of the country’s first faculty open access policies and the first to be university-wide,” says MIT Libraries Director Chris Bourg, co-chair of the task force with Hal Abelson, Class of 1922 Professor of Electrical Engineering and Computer Science. “Ten years later, we remain convinced that openly sharing research and educational materials is key to the MIT mission of advancing knowledge and bringing that knowledge to bear on the world’s greatest challenges. Through the course of our work, the task force heard from MIT community members who are passionate about extending the reach of their work, and we feel our recommendations provide policies and infrastructure to support that.”</p> <p>The recommendations include ratifying an Institute-wide set of principles for open science and open scholarship, which affirm MIT’s larger commitment to the idea that scholarship and its dissemination should remain in the hands of researchers and their institutions.
The MIT Libraries are working with the task force and the Committee on the Library System to develop a framework for negotiations with publishers based on these principles.&nbsp;</p> <p>Recommendations to broaden the MIT Faculty Open Access Policy to cover all MIT authors and to adopt an OA policy for monographs received widespread support across the Institute and in the broader community. The task force also calls for heads of departments, labs, and centers to develop discipline-specific plans to encourage and support open sharing. The libraries have already begun working with the departments of Linguistics and Philosophy and Brain and Cognitive Sciences to develop sample plans.&nbsp;</p> <p>“Scholarship serves humanity best when it is available to everyone,” says Abelson. “These recommendations reinforce MIT's leadership in open access to scholarship.”</p> <p>In an email to the MIT community, Provost Martin Schmidt announced that he would appoint an implementation team this fall to prioritize and enact the task force’s recommendations. He has asked Chris Bourg to convene and lead this team.</p> The Ad Hoc Task Force on Open Access to MIT’s Research has released its final recommendations, which aim to support and increase the open sharing of MIT publications, data, software, and educational materials. Photo: Dominick ReuterLibraries, Open access, Linguistics and Philosophy, Brain and cognitive sciences, School of Engineering, Electrical engineering and computer science (EECS), Computer Science and Artificial Intelligence Laboratory (CSAIL), School of Science, School of Humanities Arts and Social Sciences, Digital humanities, Research, Community Controlling our internal world Design principles from robotics help researchers decipher elements controlling mental processes in the brain. 
Wed, 16 Oct 2019 13:15:01 -0400 Sabbi Lall | McGovern Institute for Brain Research <div> <p>Olympic skaters can launch, perform multiple aerial turns, and land gracefully, anticipating imperfections and reacting quickly to correct course. To make such elegant movements, the brain must have an internal model of the body to control, predict, and make almost-instantaneous adjustments to motor commands. So-called “internal models” are a fundamental concept in engineering and have long been suggested to underlie control of movement by the brain, but what about processes that occur in the absence of movement, such as contemplation, anticipation, planning?</p> <p>Using a novel combination of task design, data analysis, and modeling, MIT neuroscientist <a href="" rel="noopener noreferrer" target="_blank">Mehrdad Jazayeri</a> and colleagues now provide compelling evidence that the core elements of an internal model also control purely mental processes.</p> <p>“During my thesis, I realized that I’m interested not so much in how our senses react to sensory inputs, but instead in how my internal model of the world helps me make sense of those inputs,” says Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.</p> <p>Indeed, understanding the building blocks exerting control of such mental processes could help to paint a better picture of disruptions in mental disorders, such as&nbsp;<a href="" rel="noopener noreferrer" target="_blank">schizophrenia</a>.</p> <p><strong>Internal models for mental processes</strong></p> <p>Scientists working on the motor system have long theorized that the brain overcomes noisy and slow signals using an accurate internal model of the body. 
This internal model serves three critical functions: it provides motor commands to control movement, simulates upcoming movement to overcome delays, and uses feedback to make real-time adjustments.</p> <p>“The framework that we currently use to think about how the brain controls our actions is one that we have borrowed from robotics: We use controllers, simulators, and sensory measurements to control machines and train operators,” explains Reza Shadmehr, a professor at the Johns Hopkins School of Medicine who was not involved with the study. “That framework has largely influenced how we imagine our brain controlling our movements.”</p> <p>Jazayeri and colleagues wondered whether the same framework might explain the control principles governing mental states in the absence of any movement.</p> <p>“When we’re simply sitting, thoughts and images run through our heads and, fundamental to intellect, we can control them,” explains lead author Seth Egger, a former postdoc in the Jazayeri lab who is now at Duke University. “We wanted to find out what’s happening between our ears when we are engaged in thinking.”</p> <p>Imagine, for example, a sign language interpreter keeping up with a fast speaker. To track speech accurately, the interpreter continuously anticipates where the speech is going, rapidly adjusting when the actual words deviate from the prediction. The interpreter could be using an internal model to anticipate upcoming words, and using feedback to make adjustments on the fly.</p> <p><strong>1-2-3-Go</strong></p> <p>Hypothesizing about how the components of an internal model function in scenarios such as translation is one thing. Cleanly measuring and proving the existence of these elements is much more complicated, as the activity of the controller, simulator, and feedback are intertwined.
To tackle this problem, Jazayeri and colleagues devised a clever task with primate models in which the controller, simulator, and feedback act at distinct times.</p> <p>In this task, called “1-2-3-Go,” the animal sees three consecutive flashes (1, 2, and 3) that form a regular beat, and learns to make an eye movement (Go) when it anticipates that the fourth flash should occur. During the task, researchers measured neural activity in a region of the frontal cortex they had previously linked to the timing of movement.</p> <p>Jazayeri and colleagues had clear predictions about when the controller would act (between the third flash and “Go”) and when feedback would be engaged (with each flash of light). The key surprise came when researchers saw evidence for the simulator anticipating the third flash. This unexpected neural activity had dynamics that resembled those of the controller, but was not associated with a response. In other words, the researchers uncovered a covert plan that functions as the simulator, thus revealing all three elements of an internal model for a mental process: the planning and anticipation of “Go” in the “1-2-3-Go” sequence.</p> <p>“Jazayeri’s work is important because it demonstrates how to study mental simulation in animals,” explains Shadmehr, “and where in the brain that simulation is taking place.”</p> <p>Having found how and where to measure an internal model in action, Jazayeri and colleagues now plan to ask whether these control strategies can explain how primates effortlessly generalize their knowledge from one behavioral context to another. For example, how does an interpreter rapidly adjust when someone with widely different speech habits takes the podium?
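The timing logic of the “1-2-3-Go” task can be sketched in a few lines of code. This is a toy illustration only, not the authors’ neural model: the simulator carries an internal estimate of the beat interval, each observed flash provides feedback that corrects that estimate, and the controller schedules “Go” one interval after the third flash. The `gain` parameter is a hypothetical knob for how strongly feedback updates the estimate.

```python
# Toy sketch of an internal model for the "1-2-3-Go" task
# (illustrative only; not the authors' neural model).
#   simulator:  maintains an estimate of the beat interval
#   feedback:   each real flash corrects that estimate
#   controller: schedules "Go" one interval after flash 3

def predict_go(flash_times, gain=0.5):
    """Predict the fourth-flash ("Go") time from three flash times.

    gain is a hypothetical parameter: how strongly feedback from the
    third flash updates the internal interval estimate.
    """
    t1, t2, t3 = flash_times
    interval = t2 - t1                # initial estimate from flashes 1-2
    error = (t3 - t2) - interval      # feedback: prediction error at flash 3
    interval += gain * error          # correct the simulator's estimate
    return t3 + interval              # controller: plan "Go" one beat later

# A regular beat at 0.0, 0.6, 1.2 s predicts "Go" near 1.8 s
print(predict_go([0.0, 0.6, 1.2]))
```

With an irregular beat, the `gain` term pulls the prediction partway toward the most recent interval, which is the corrective role the feedback element plays in the loop.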
This line of investigation promises to shed light on high-level mental capacities of the primate brain that simpler animals seem to lack, that go awry in mental disorders, and that designers of artificial intelligence systems so fondly seek.</p> </div> MIT neuroscientists have shown that the core elements of an internal model also control purely mental processes.McGovern Institute, Brain and cognitive sciences, School of Science, Research, Neuroscience, Mental health New method visualizes groups of neurons as they compute Fluorescent probe could allow scientists to watch circuits within the brain and link their activity to specific behaviors. Wed, 09 Oct 2019 12:59:59 -0400 Anne Trafton | MIT News Office <p>Using a fluorescent probe that lights up when brain cells are electrically active, MIT and Boston University researchers have shown that they can image the activity of many neurons at once, in the brains of mice.</p> <p>This technique, which can be performed using a simple light microscope, could allow neuroscientists to visualize the activity of circuits within the brain and link them to specific behaviors, says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology and a professor of biological engineering and of brain and cognitive sciences at MIT.</p> <p>“If you want to study a behavior, or a disease, you need to image the activity of populations of neurons because they work together in a network,” says Boyden, who is also a member of MIT’s McGovern Institute for Brain Research, Media Lab, and Koch Institute for Integrative Cancer Research.</p> <p>Using this voltage-sensing molecule, the researchers showed that they could record electrical activity from many more neurons than has been possible with any existing, fully genetically encoded, fluorescent voltage probe.</p> <p>Boyden and Xue Han, an associate professor of biomedical engineering at Boston University, are the senior authors of <a href="" target="_blank">the study</a>, which appears in the Oct. 
9 online edition of <em>Nature</em>. The lead authors of the paper are MIT postdoc Kiryl Piatkevich, BU graduate student Seth Bensussen, and BU research scientist Hua-an Tseng.</p> <p><strong>Seeing connections</strong></p> <p>Neurons compute using rapid electrical impulses, which underlie our thoughts, behavior, and perception of the world. Traditional methods for measuring this electrical activity require inserting an electrode into the brain, a process that is labor-intensive and usually allows researchers to record from only one neuron at a time. Multielectrode arrays allow the monitoring of electrical activity from many neurons at once, but they don’t sample densely enough to get all the neurons within a given volume.&nbsp; Calcium imaging does allow such dense sampling, but it measures calcium, an indirect and slow measure of neural electrical activity.</p> <p>In 2018, Boyden’s team developed an <a href="">alternative way</a> to monitor electrical activity by labeling neurons with a fluorescent probe. Using a technique known as directed protein evolution, his group engineered a molecule called Archon1 that can be genetically inserted into neurons, where it becomes embedded in the cell membrane. When a neuron’s electrical activity increases, the molecule becomes brighter, and this fluorescence can be seen with a standard light microscope.</p> <p>In the 2018 paper, Boyden and his colleagues showed that they could use the molecule to image electrical activity in the brains of transparent worms and zebrafish embryos, and also in mouse brain slices. In the new study, they wanted to try to use it in living, awake mice as they engaged in a specific behavior.</p> <p>To do that, the researchers had to modify the probe so that it would go to a subregion of the neuron membrane. They found that when the molecule inserts itself throughout the entire cell membrane, the resulting images are blurry because the axons and dendrites that extend from neurons also fluoresce. 
To overcome that, the researchers attached a small peptide that guides the probe specifically to membranes of the cell bodies of neurons. They called this modified protein SomArchon.</p> <p>“With SomArchon, you can see each cell as a distinct sphere,” Boyden says. “Rather than having one cell’s light blurring all its neighbors, each cell can speak by itself loudly and clearly, uncontaminated by its neighbors.”</p> <p>The researchers used this probe to image activity in a part of the brain called the striatum, which is involved in planning movement, as mice ran on a ball. They were able to monitor activity in several neurons simultaneously and correlate each one’s activity with the mice’s movement. Some neurons’ activity went up when the mice were running, some went down, and others showed no significant change.</p> <p>“Over the years, my lab has tried many different versions of voltage sensors, and none of them have worked in living mammalian brains until this one,” Han says.</p> <p>Using this fluorescent probe, the researchers were able to obtain measurements similar to those recorded by an electrical probe, which can pick up activity on a very rapid timescale. This makes the measurements more informative than existing techniques such as imaging calcium, which neuroscientists often use as a proxy for electrical activity.</p> <p>“We want to record electrical activity on a millisecond timescale,” Han says. “The timescale and activity patterns that we get from calcium imaging are very different. We really don’t know exactly how these calcium changes are related to electrical dynamics.”</p> <p>With the new voltage sensor, it is also possible to measure very small fluctuations in activity that occur even when a neuron is not firing a spike. 
This could help neuroscientists study how small fluctuations impact a neuron’s overall behavior, which has previously been very difficult in living brains, Han says.</p> <p>The study “introduces a new and powerful genetic tool” for imaging voltage in the brains of awake mice, says Adam Cohen, a professor of chemistry, chemical biology, and physics at Harvard University.</p> <p>“Previously, researchers had to impale neurons with fine glass capillaries to make electrical recordings, and it was only possible to record from one or two cells at a time.&nbsp;The Boyden team recorded from about 10 cells at a time. That’s a lot of cells,” says Cohen, who was not involved in the research. “These tools open new possibilities to study the statistical structure of neural activity.&nbsp;But a mouse brain contains about 75 million neurons, so we still have a long way to go.”</p> <p><strong>Mapping circuits</strong></p> <p>The researchers also showed that this imaging technique can be combined with <a href="">optogenetics</a> — a technique developed by the Boyden lab and collaborators that allows researchers to turn neurons on and off with light by engineering them to express light-sensitive proteins. In this case, the researchers activated certain neurons with light and then measured the resulting electrical activity in these neurons.</p> <p>This imaging technology could also be combined with <a href="">expansion microscopy</a>, a technique that Boyden’s lab developed to expand brain tissue before imaging it, making it easier to see the anatomical connections between neurons in high resolution.</p> <p>“One of my dream experiments is to image all the activity in a brain, and then use expansion microscopy to find the wiring between those neurons,” Boyden says.
“Then can we predict how neural computations emerge from the wiring?”</p> <p>Such wiring diagrams could allow researchers to pinpoint circuit abnormalities that underlie brain disorders, and may also help researchers to design artificial intelligence that more closely mimics the human brain, Boyden says.</p> <p>The MIT portion of the research was funded by Edward and Kay Poitras, the National Institutes of Health, including a Director’s Pioneer Award, Charles Hieken, John Doerr, the National Science Foundation, the HHMI-Simons Faculty Scholars Program, the Human Frontier Science Program, and the U.S. Army Research Office.</p> In the top row, neurons are labeled with a fluorescent probe that reveals electrical activity. In the bottom row, neurons are labeled with a variant of the probe that accumulates specifically in the neuron cell bodies, preventing interference from axons of neighboring neurons. Image courtesy of the researchersResearch, Brain and cognitive sciences, Media Lab, Biological engineering, McGovern Institute, Koch Institute, School of Engineering, School of Science, School of Architecture and Planning, National Institutes of Health (NIH), National Science Foundation (NSF), Neuroscience Alzheimer’s plaque emerges early and deep in the brain Clumps of amyloid protein emerge early in deep regions, such as the mammillary body, and march outward in the brain along specific circuits. Tue, 08 Oct 2019 12:00:01 -0400 David Orenstein | Picower Institute <p>Long before symptoms like memory loss even emerge, the underlying pathology of Alzheimer’s disease, such as an accumulation of amyloid protein plaques, is well underway in the brain. A longtime goal of the field has been to understand where it starts so that future interventions could begin there.
A new study by MIT neuroscientists at The Picower Institute for Learning and Memory could help those efforts by pinpointing the regions with the earliest emergence of amyloid in the brain of a prominent mouse model of the disease. Notably, the study also shows that the degree of amyloid accumulation in one of those same regions of the human brain correlates strongly with the progression of the disease.</p> <div class="cms-placeholder-content-video"></div> <p>“Alzheimer’s is a neurodegenerative disease, so in the end you can see a lot of neuron loss,” says Wen-Chin “Brian” Huang, co-lead author of the study and a postdoc in the lab of co-senior author Li-Huei Tsai, Picower Professor of Neuroscience and director of the Picower Institute. “At that point, it would be hard to cure the symptoms. It’s really critical to understand what circuits and regions show neuronal dysfunction early in the disease. This will, in turn, facilitate the development of effective therapeutics.”</p> <p>In addition to Huang, the study’s co-lead authors are Rebecca Canter, a former member of the Tsai lab, and Heejin Choi, a former member of the lab of co-senior author Kwanghun Chung, associate professor of chemical engineering and a member of the Picower Institute and the MIT Institute for Medical Engineering and Science.</p> <p><strong>Tracking plaques</strong></p> <p>Many research groups have made progress in recent years by tracing amyloid’s path in the brain using technologies such as positron emission tomography, and by looking at brains post-mortem, but the new study in <em>Communications Biology </em>adds substantial new evidence from the 5XFAD mouse model because it presents an unbiased look at the entire brain as early as one month of age. 
The study reveals that amyloid begins its terrible march in deep brain regions such as the mammillary body, the lateral septum, and the subiculum before making its way along specific brain circuits that ultimately lead it to the hippocampus, a key region for memory, and the cortex, a key region for cognition.</p> <p>The team used SWITCH, a technology developed by Chung, to label amyloid plaques and to clarify the whole brains of 5XFAD mice so that they could be imaged in fine detail at different ages. The team was consistently able to see that plaques first emerged in the deep brain structures and then tracked along circuits, such as the Papez memory circuit, to spread throughout the brain by six to 12 months (a mouse’s lifespan is up to three years).</p> <p>The findings help to cement an understanding that has been harder to obtain from human brains, Huang says, because post-mortem dissection cannot easily account for how the disease developed over time and PET scans don’t offer the kind of resolution the new study provides from the mice.</p> <p><strong>Key validations</strong></p> <p>Importantly, the team directly validated a key prediction of their mouse findings in human tissue: If the mammillary body is indeed a very early place that amyloid plaques emerge, then the density of those plaques should increase in proportion with how far advanced the disease is. Sure enough, when the team used SWITCH to examine the mammillary bodies of post-mortem human brains at different stages of the disease, they saw exactly that relationship: The later the stage, the more densely plaque-packed the mammillary body was.</p> <p>“This suggests that human brain alterations in Alzheimer’s disease look similar to what we observe in mouse,” the authors wrote.
“Thus we propose that amyloid-beta deposits start in susceptible subcortical structures and spread to increasingly complex memory and cognitive networks with age.”</p> <p>The team also performed experiments to determine whether the accumulation of plaques they observed was of real disease-related consequence for neurons in affected regions. One of the hallmarks of Alzheimer’s disease is a vicious cycle in which amyloid makes neurons too easily excited, and overexcitement causes neurons to produce more amyloid. The team measured the excitability of neurons in the mammillary body of 5XFAD mice and found they were more excitable than those of otherwise similar mice that did not harbor the 5XFAD set of genetic alterations.</p> <p>In a preview of a potential future therapeutic strategy, when the researchers used a genetic approach to silence the neurons in the mammillary body of some 5XFAD mice but left neurons in others unaffected, the mice with silenced neurons produced less amyloid.</p> <p>While the study findings help explain much about how amyloid spreads in the brain over space and time, they also raise new questions, Huang said. How might the mammillary body affect memory, and what types of cells are most affected there?</p> <p>“This study sets a stage for further investigation of how dysfunction in these brain regions and circuits contributes to the symptoms of Alzheimer’s disease,” he says.</p> <p>In addition to Huang, Canter, Choi, Tsai, and Chung, the paper’s other authors are Jun Wang, Lauren Ashley Watson, Christine Yao, Fatema Abdurrob, Stephanie Bousleiman, Jennie Young, David Bennett, and Ivana Dellalle.</p> <p>The National Institutes of Health, the JPB Foundation, Norman B.
Leventhal and Barbara Weedon fellowships, The Burroughs Wellcome Fund, the Searle Scholars Program, a Packard Award, a NARSAD Young Investigator Award, and the NCSOFT Cultural Foundation funded the research.</p> A white-stained cluster of amyloid plaque proteins, a hallmark of Alzheimer's disease pathology, is evident in the mammillary body of a 2-month-old Alzheimer's model mouse. A new study finds that plaques begin in such deep regions and spread throughout the brain along specific circuits.Image: Picower InstitutePicower Institute for Learning and Memory, School of Science, School of Engineering, Neuroscience, Alzheimer's, Institute for Medical Engineering and Science (IMES), Brain and cognitive sciences, Disease, Mental health, Research Study: Better sleep habits lead to better college grades Data on MIT students underscore the importance of getting enough sleep; bedtime also matters. Tue, 01 Oct 2019 05:00:00 -0400 David L. Chandler | MIT News Office <p>Two MIT professors have found a strong relationship between students’ grades and how much sleep they’re getting. What time students go to bed and the consistency of their sleep habits also make a big difference. And no, getting a good night’s sleep just before a big test is not good enough — it takes several nights in a row of good sleep to make a difference.</p> <p>Those are among the conclusions from an experiment in which 100 students in an MIT engineering class were given Fitbits, the popular wrist-worn devices that track a person’s activity 24/7, in exchange for the researchers’ access to a semester’s worth of their activity data. 
The findings — some unsurprising, but some quite unexpected — are reported today in the journal <em>Science of Learning </em>in a paper by MIT postdoc Kana Okano, professors Jeffrey Grossman and John Gabrieli, and two others.</p> <p>One of the surprises was that individuals who went to bed after some particular threshold time — for these students, that tended to be 2 a.m., but it varied from one person to another — tended to perform less well on their tests no matter how much total sleep they ended up getting.</p> <p>The study didn’t start out as research on sleep at all. Instead, Grossman was trying to find a correlation between physical exercise and the academic performance of students in his class 3.091 (Introduction to Solid-State Chemistry). In addition to having 100 of the students wear Fitbits for the semester, he also enrolled about one-fourth of them in an intense fitness class in MIT’s Department of Athletics, Physical Education, and Recreation, with the help of assistant professors Carrie Moore and Matthew Breen, who created the class specifically for this study. The thinking was that there might be measurable differences in test performance between the two groups.</p> <p>There wasn’t. Those without the fitness classes performed just as well as those who did take them. “What we found at the end of the day was zero correlation with fitness, which I must say was disappointing since I believed, and still believe, there is a tremendous positive impact of exercise on cognitive performance,” Grossman says.</p> <p>He speculates that the intervals between the fitness program and the classes may have been too long to show an effect. But meanwhile, in the vast amount of data collected during the semester, some other correlations did become obvious. 
While the devices weren’t explicitly monitoring sleep, the Fitbit program’s proprietary algorithms did detect periods of sleep and changes in sleep quality, primarily based on lack of activity.</p> <p>These correlations were not at all subtle, Grossman says. There was essentially a straight-line relationship between the average amount of sleep a student got and their grades on the 11 quizzes, three midterms, and final exam, with the grades ranging from A’s to C’s. “There’s lots of scatter, it’s a noisy plot, but it’s a straight line,” he says. The fact that there was a correlation between sleep and performance wasn’t surprising, but the extent of it was, he says. Of course, this correlation can’t absolutely prove that sleep was the determining factor in the students’ performance, as opposed to some other influence that might have affected both sleep and grades. But the results are a strong indication, Grossman says, that sleep “really, really matters.”</p> <p>“Of course, we knew already that more sleep would be beneficial to classroom performance, from a number of previous studies that relied on subjective measures like self-report surveys,” Grossman says. “But in this study the benefits of sleep are correlated to performance in the context of a real-life college course, and driven by large amounts of objective data collection.”</p> <p>The study also revealed no improvement in scores for those who made sure to get a good night’s sleep right before a big test. According to the data, “the night before doesn’t matter,” Grossman says. “We've heard the phrase ‘Get a good night’s sleep, you've got a big day tomorrow.’ It turns out this does not correlate at all with test performance. 
Instead, it’s the sleep you get during the days when learning is happening that matters most.”</p> <p>Another surprising finding is that there appears to be a certain cutoff for bedtimes, such that going to bed later results in poorer performance, even if the total amount of sleep is the same. “When you go to bed matters,” Grossman says. “If you get a certain amount of sleep — let’s say seven hours — no matter when you get that sleep, as long as it’s before certain times, say you go to bed at 10, or at 12, or at 1, your performance is the same. But if you go to bed after 2, your performance starts to go down even if you get the same seven hours. So, quantity isn’t everything.”</p> <p>Quality of sleep also mattered, not just quantity. For example, those who got relatively consistent amounts of sleep each night did better than those who had greater variations from one night to the next, even if they ended up with the same average amount.</p> <p>This research also helped to provide an explanation for something that Grossman says he had noticed and wondered about for years, which is that on average, the women in his class have consistently gotten better grades than the men. Now, he has a possible answer: The data show that the differences in quantity and quality of sleep can fully account for the differences in grades. “If we correct for sleep, men and women do the same in class. So sleep could be the explanation for the gender difference in our class,” he says.</p> <p>More research will be needed to understand the reasons why women tend to have better sleep habits than men. “There are so many factors out there that it could be,” Grossman says.
“I can envision a lot of exciting follow-on studies to try to understand this result more deeply.”</p> <p>“The results of this study are very gratifying to me as a sleep researcher, but are terrifying to me as a parent,” says Robert Stickgold, a professor of psychiatry and director of the Center for Sleep and Cognition at Harvard Medical School, who was not connected with this study. He adds, “The overall course grades for students averaging six and a half hours of sleep were down 50 percent from other students who averaged just one hour more sleep. Similarly, those who had just a half-hour more night-to-night variation in their total sleep time had grades that dropped 45 percent below others with less variation. This is huge!”</p> <p>Stickgold says “a full quarter of the variation in grades was explained by these sleep parameters (including bedtime). All students need to not only be aware of these results, but to understand their implication for success in college. I can’t help but believe the same is true for high school students.” But he adds one caution: “That said, correlation is not the same as causation. While I have no doubt that less and more variable sleep will hurt a student’s grades, it’s also possible that doing poorly in classes leads to less and more variable sleep, not the other way around, or that some third factor, such as ADHD, could independently lead to poorer grades and poorer sleep.”</p> <p>The team also included technical assistant Jakub Kaczmarzyk and Harvard Business School researcher Neha Dave. The study was supported by MIT’s Department of Materials Science and Engineering, the Lubin Fund, and the MIT Integrated Learning Initiative.</p> Even relatively small differences in the duration, timing, and consistency of students' sleep may have significant effects on course test results, a new MIT study shows.
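The statistics this article reports, a roughly straight-line relationship between average sleep and grades and "a full quarter of the variation" explained by sleep parameters, come down to a Pearson correlation and its square. The sketch below illustrates that calculation on synthetic, made-up numbers; the sample size, effect size, and noise level are assumptions for illustration only, not the study's actual data.

```python
import numpy as np

# Synthetic stand-in for the study's measurements (hypothetical numbers).
rng = np.random.default_rng(0)
n_students = 100
sleep_hours = rng.normal(7.0, 1.0, n_students)   # average nightly sleep per student
noise = rng.normal(0.0, 8.0, n_students)         # everything else affecting grades
grades = 20.0 + 8.0 * sleep_hours + noise        # assumed roughly linear trend

# Pearson correlation between average sleep and overall grade
r = np.corrcoef(sleep_hours, grades)[0, 1]

# r^2 is the "fraction of grade variation explained" in a simple linear fit,
# the quantity Stickgold's "full quarter of the variation" refers to.
r_squared = r ** 2

# Slope of the straight-line fit: grade points gained per extra hour of sleep
slope, intercept = np.polyfit(sleep_hours, grades, 1)

print(f"r = {r:.2f}, r^2 = {r_squared:.2f}, slope = {slope:.1f} points/hour")
```

With real per-student data the same two calls (`np.corrcoef`, `np.polyfit`) would apply unchanged; only the magnitudes of `r_squared` and `slope` would differ.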
Research, DMSE, Brain and cognitive sciences, Health, School of Engineering, School of Science, Mental health, McGovern Institute, Students, Student life, education, Education, teaching, academics Technique can image individual proteins within synapses Rapid imaging method could help reveal how conditions such as autism affect brain cells. Thu, 26 Sep 2019 04:59:59 -0400 Anne Trafton | MIT News Office <p>Our brains contain trillions of synapses — the connections that transmit messages from neuron to neuron. Within these synapses are hundreds of different proteins, and dysfunction of these proteins can lead to conditions such as schizophrenia and autism.</p> <p>Researchers at MIT and the Broad Institute of MIT and Harvard have now devised a new way to rapidly image these synaptic proteins at high resolution. Using fluorescent nucleic acid probes, they can label and image an unlimited number of different proteins. They demonstrated the technique in a new study in which they imaged 12 proteins in cellular samples containing thousands of synapses.</p> <p>“Multiplexed imaging is important because there’s so much variability between synapses and cells, even within the same brain,” says Mark Bathe, an MIT associate professor of biological engineering. “You really need to look simultaneously at proteins in the sample to understand what subpopulations of different synapses look like, discover new types of synapses, and understand how genetic variations impact them.”</p> <p>The researchers plan to use this technique next to study what happens to synapses when they block the expression of genes associated with specific diseases, in hopes of developing new treatments that could reverse those effects.</p> <p>Bathe and Jeff Cottrell, director of translational research at the Stanley Center for Psychiatric Research at the Broad Institute, are the senior authors of the study, which appears today in <em>Nature Communications</em>. 
The lead authors of the paper are former postdocs Syuan-Ming Guo and Remi Veneziano, former graduate student Simon Gordonov, and former research scientist Li Li.</p> <p><strong>Imaging with DNA</strong></p> <p>Synaptic proteins have a variety of functions. Many of them help to form synaptic scaffolds, which are involved in secreting neurotransmitters and processing incoming signals. While synapses contain hundreds of these proteins, conventional fluorescence microscopy is limited to imaging at most four proteins at a time.</p> <p>To boost that number, the MIT team developed a new technique based on an existing method called DNA PAINT. Using this method, originally devised by Ralf Jungmann of the Max Planck Institute of Biochemistry, researchers label proteins or other molecules of interest with a DNA-antibody probe. Then, they image each protein by delivering a fluorescent DNA “oligo” that binds to the DNA-antibody probes.</p> <p>The DNA strands have an inherently low affinity for each other, so they bind and unbind periodically, creating a blinking fluorescence that can be imaged using super-resolution microscopy. However, imaging each protein takes about half an hour, making it impractical for imaging many proteins in a large sample.</p> <p>Bathe and his colleagues set out to create a faster method that would allow them to analyze a huge number of samples in a short period of time. To achieve that, they altered the DNA-dye imaging probe so that it would bind more tightly to the DNA-antibody, using what are called locked nucleic acids. 
This gives a much brighter signal, so the imaging can be done more quickly, but at slightly lower resolution.</p> <p>“When we do 12 or 15 colors on a single well of neurons, the whole experiment takes an hour, compared with overnight for the super-resolution equivalent,” Bathe says.</p> <p>The researchers used this technique to label 12 different proteins found in the synapse, including scaffolding proteins, proteins associated with the cytoskeleton, and proteins that are known to mark excitatory or inhibitory synapses. One of the proteins they looked at is shank3, a scaffold protein that has been linked to both <a href="">autism</a> and <a href="">schizophrenia</a>.</p> <p>By analyzing protein levels in thousands of neurons, the researchers were able to determine groups of proteins that tend to associate with each other more often than others, and to learn how different synapses vary in the proteins they contain. That kind of information could be used to help classify synapses into subtypes that might help to reveal their functions.</p> <p>“Inhibitory and excitatory are the canonical synapse types, but it is speculated that there are numerous different subtypes of synapses, without any real consensus around what those are,” Bathe says.</p> <p><strong>Understanding disease</strong></p> <p>The researchers also showed that they could measure changes in synaptic protein levels that occur after neurons are treated with a compound called tetrodotoxin (TTX), which strengthens synaptic connections.</p> <p>“Using conventional immunofluorescence, you can typically extract information from&nbsp;three or four targets within the same sample, but with our technique, we were able to expand that number to 12 different&nbsp;targets within the same sample. 
We applied this method to examine synaptic remodeling that occurs following treatment with TTX, and&nbsp;our finding&nbsp;corroborated previous work that revealed&nbsp;a coordinated upregulation&nbsp;of synaptic proteins following TTX treatment,” says Eric Danielson, an MIT senior postdoc who is an author of the study.</p> <p>The researchers are now using this technique, called PRISM, to study how the structure and composition of synapses are affected by knocking down a set of genes reported previously to confer genetic risk for development of psychiatric disorders. Sequencing the genomes of people with disorders such as autism and schizophrenia has revealed hundreds of disease-linked gene variants, and for most of those variants, scientists have no idea how they contribute to disease.</p> <p>“With this approach, we expect to provide a more detailed overview of the changes in synaptic organization and shared disease effects associated with these genes,” says Karen Perez de Arce, a Broad Institute research scientist and an author of the study.</p> <p>“Understanding how genetic variation impacts neurons’ development in the brain, and their synaptic structure and function, is a huge challenge in neuroscience and in understanding how these diseases arise,” Bathe adds.</p> <p>The research was funded by the National Institutes of Health, including the NIH BRAIN Initiative, the National Science Foundation, the Howard Hughes Medical Institute Simons Faculty Scholars Program, the Open Philanthropy Project, the U.S. Army Research Laboratory, the New York Stem Cell Foundation Robertson Award, and the Stanley Center for Psychiatric Research.</p> <p>Other authors of the paper include MIT research scientist Demian Park, former MIT graduate student Anthony Kulesa, and MIT postdoc Eike-Christian Wamhoff. Paul Blainey, an associate professor of biological engineering and a member of the Broad Institute, and Edward Boyden, the Y. 
Eva Tan Professor in Neurotechnology and an associate professor of biological engineering and of brain and cognitive sciences, are also authors of the study.</p> Researchers at MIT and the Broad Institute of MIT and Harvard have devised a new way to rapidly image synaptic proteins at high resolution. Using fluorescent nucleic acid probes, they can label and image as many as 12 different proteins in neuronal samples containing thousands of synapses.Syuan-Ming Guo and Li LiResearch, Biological engineering, Broad Institute, School of Engineering, Neuroscience, DNA, Synapses, Imaging, National Institutes of Health (NIH), National Science Foundation (NSF) Josh Tenenbaum receives 2019 MacArthur Fellowship Brain and cognitive sciences professor studies how the human mind is able to learn so rapidly. Wed, 25 Sep 2019 07:00:00 -0400 Anne Trafton | MIT News Office <p>Josh Tenenbaum, a professor in MIT’s Department of Brain and Cognitive Sciences who studies human cognition, has been named a recipient of a 2019 MacArthur Fellowship.</p> <p>The fellowships, often referred to as “genius grants,” come with a five-year, $625,000 prize, which recipients are free to use as they see fit.</p> <p>“It’s an amazing honor, and very unexpected. There are a very small number of cognitive scientists who have ever received it, so it’s an incredible honor to be in their company,” says Tenenbaum, a professor of computational cognitive science and a member of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds and Machines (CBMM).</p> <p>Using computer modeling and behavioral experiments, Tenenbaum seeks to understand a key aspect of human intelligence: how people are able to rapidly learn new concepts and tasks based on very little information. 
This phenomenon is particularly noticeable in babies and young children, who can quickly learn meanings of new words, or how objects behave in the physical world, after minimal exposure to them.</p> <p>“One thing we’re trying to understand is how are these basic ways of understanding the world built, in very young children? What are babies born with? How do children really learn and how can we describe those ideas in engineering terms?” Tenenbaum says.</p> <p>Additionally, his lab explores how the mind performs cognitive processes such as making predictions about future events, inferring the mental states of other people, making judgments regarding cause and effect, and constructing theories about rules that govern physical interactions or social behavior.</p> <p>Tenenbaum says he would like to use the grant money to fund some of the more creative student projects in his lab, which are harder to get funding for, as well as collaborations with MIT colleagues that he sees as key partners in studying various aspects of cognition. He also hopes to use some of the funding to support his department’s efforts to increase research participation of under-represented minority students.</p> <p>Tenenbaum also studies machine learning and artificial intelligence, with the goal of bringing machine-learning algorithms closer to the capacities of human learning. This could lead to more powerful AI systems as well as more powerful theoretical paradigms for understanding human cognition.</p> <p>Tenenbaum received his PhD from MIT in 1999, and after a brief postdoc with the MIT AI Lab he joined the Stanford University faculty as an assistant professor of psychology. He returned to MIT as a faculty member in 2002. 
Last year, he was named a scientific director of The Core, a part of MIT’s Quest for Intelligence that focuses on advancing the science and engineering of both human and machine intelligence.</p> <p>Including Tenenbaum, 24 MIT faculty members and three staff members have won the MacArthur fellowship.</p> <p>MIT faculty who have won the award over the last decade include health care economist Amy Finkelstein and media studies scholar Lisa Parks (2018); computer scientist Regina Barzilay (2017); economist Heidi Williams (2015); computer scientist Dina Katabi and astrophysicist Sara Seager (2013); writer Junot Díaz (2012); physicist Nergis Mavalvala (2010); and economist Esther Duflo (2009).</p> Josh TenenbaumImage: Lilly Paquette, MITFaculty, Brain and cognitive sciences, Behavior, Memory, Computer Science and Artificial Intelligence Laboratory (CSAIL), Center for Brains Minds and Machines, School of Science, Awards, honors and fellowships Study finds hub linking movement and motivation in the brain Detailed observations in the lateral septum indicate region processes movement and reward information to help direct behavior. Thu, 19 Sep 2019 12:50:01 -0400 David Orenstein | Picower Institute for Learning and Memory <p>Our everyday lives rely on planned movement through the environment to achieve goals. A new study by neuroscientists at the Picower Institute for Learning and Memory at MIT identifies a well-connected brain region as a crucial link between circuits guiding goal-directed movement and motivated behavior.</p> <p>Published Sept.
19 in <em>Current Biology</em>, the research shows that the lateral septum (LS), a region considered integral to modulating behavior and implicated in many psychiatric disorders, directly encodes information about the speed and acceleration of an animal as it navigates and learns how to obtain a reward in an environment.</p> <p>“Completing a simple task, such as acquiring food for dinner, requires the participation and coordination of a large number of regions of the brain, and the weighing of a number of factors: for example, how much effort is it to get food from the fridge versus a restaurant,” says&nbsp;Hannah Wirtshafter PhD '19, the study’s lead author. “We have discovered that the LS may be aiding you in making some of those decisions. That the LS represents place, movement, and motivational information may enable the LS to help you integrate or optimize performance across considerations of place, speed, and other environmental signals.”</p> <p>Previous research has attributed important behavioral functions to the LS, such as modulating anxiety, aggression, and affect. It is also believed to be involved in addiction, psychosis, depression, and anxiety. Neuroscientists have traced its connections to the hippocampus, a crucial center for encoding spatial memories and associating them with context, and to the ventral tegmental area (VTA), a region that mediates goal-directed behaviors via the neurotransmitter dopamine. 
But until now, no one had shown that the LS directly tracks movement or communicates with the hippocampus, for instance by synchronizing to certain neural rhythms, about movement and the spatial context of reward.</p> <p>“The hippocampus is one of the most studied regions of the brain due to its involvement in memory, spatial navigation, and a large number of illnesses such as Alzheimer’s disease,” says&nbsp;Wirtshafter, who recently earned her PhD working on the research as a graduate student in the lab of senior author Matthew Wilson, Sherman Fairchild Professor of Neurobiology. “Comparatively little is known about the lateral septum, even though it receives a large amount of information from the hippocampus and is connected to multiple areas involved in motivation and movement.”</p> <p>Wilson says the study helps to illuminate the importance of the LS as a crossroads of movement and motivation information between regions such as the hippocampus and the VTA.</p> <p>“The discovery that activity in the LS is controlled by movement points to a link between movement and dopaminergic control through the LS that could be relevant to memory, cognition, and disease,” he says.</p> <p><strong>Tracking thoughts</strong></p> <p>Wirtshafter was able to directly observe the interactions between the LS and the hippocampus by simultaneously recording the electrical spiking activity of hundreds of neurons in each region in rats both as they sought a reward in a T-shaped maze, and as they became conditioned to associate light and sound cues with a reward in an open box environment.</p> <p>In that data, she and Wilson observed a speed and acceleration spiking code in the dorsal area of the LS, and saw clear signs that an overlapping population of neurons was processing information based on signals from the hippocampus, including spiking activity locked to hippocampal brain rhythms, location-dependent firing in the T-maze, and cue and reward responses during the conditioning
task. Those observations suggested to the researchers that the septum may serve as a point of convergence of information about movement and spatial context.</p> <p>Wirtshafter’s measurements also showed that coordination of LS spiking with the hippocampal theta rhythm is selectively enhanced during choice behavior that relies on spatial working memory, suggesting that the LS may be a key relay of information about choice outcome during navigation.</p> <p><strong>Putting movement in context</strong></p> <p>Overall, the findings suggest that movement-related signaling in the LS, combined with the input that it receives from the hippocampus, may allow the LS to contribute to an animal’s awareness of its own position in space, as well as its ability to evaluate task-relevant changes in context arising from the animal’s movement, such as when it has reached a choice point, Wilson and Wirtshafter said.</p> <p>This also suggests that the reported ability of the LS to modulate affect and behavior may result from its ability to evaluate how internal states change during movement, and the consequences and outcomes of these changes. 
For instance, the LS may contribute to directing movement toward or away from the location of a positive or negative stimulus.</p> <p>The new study therefore offers new perspectives on the role of the lateral septum in directed behavior, the researchers added, and given the known associations of the LS with some disorders, it may also offer new implications for broader understanding of the mechanisms relating mood, motivation, and movement, and the neuropsychiatric basis of mental illnesses.</p> <p>“Understanding how the LS functions in movement and motivation will aid us in understanding how the brain makes basic decisions, and how disruption in these processes might lead to different disorders,” Wirtshafter says.</p> <p>A National Defense Science and Engineering Graduate Fellowship and the JPB Foundation funded the research.</p> An MIT study is the first to show that a brain region called the lateral septum directly encodes movement information such as speed. Image: Hannah Wirtshafter/Picower Institute for Learning and MemoryPicower Institute, School of Science, Biology, Neuroscience, Behavior, Alumni/ae, Brain and cognitive sciences, Research Perception of musical pitch varies across cultures How people interpret musical notes depends on the types of music they have listened to, researchers find. Thu, 19 Sep 2019 10:59:59 -0400 Anne Trafton | MIT News Office <p>People who are accustomed to listening to Western music, which is based on a system of notes organized in octaves, can usually perceive the similarity between notes that are the same but played in different registers — say, high C and middle C. However, a longstanding question is whether this is a universal phenomenon or one that has been ingrained by musical exposure.</p> <p>This question has been hard to answer, in part because of the difficulty in finding people who have not been exposed to Western music.
Now, a new study led by researchers from MIT and the Max Planck Institute for Empirical Aesthetics has found that unlike residents of the United States, people living in a remote area of the Bolivian rainforest usually do not perceive the similarities between two versions of the same note played at different registers (high or low).</p> <p>The findings suggest that although there is a natural mathematical relationship between the frequencies of every “C,” no matter what octave it’s played in, the brain only becomes attuned to those similarities after hearing music based on octaves, says Josh McDermott, an associate professor in MIT’s Department of Brain and Cognitive Sciences.</p> <p>“It may well be that there is a biological predisposition to favor octave relationships, but it doesn’t seem to be realized unless you are exposed to music in an octave-based system,” says McDermott, who is also a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds and Machines.</p> <p>The study also found that members of the Bolivian tribe, known as the Tsimane’, and Westerners do have a very similar upper limit on the frequency of notes that they can accurately distinguish, suggesting that that aspect of pitch perception may be independent of musical experience and biologically determined.</p> <p>McDermott is the senior author of the study, which appears in the journal <em>Current Biology</em> on Sept. 19. Nori Jacoby, a former MIT postdoc who is now a group leader at the Max Planck Institute for Empirical Aesthetics, is the paper’s lead author. 
Other authors are Eduardo Undurraga, an assistant professor at the Pontifical Catholic University of Chile; Malinda McPherson, a graduate student in the Harvard/MIT Program in Speech and Hearing Bioscience and Technology; Joaquin Valdes, a graduate student at the Pontifical Catholic University of Chile; and Tomas Ossandon, an assistant professor at the Pontifical Catholic University of Chile.</p> <p><strong>Octaves apart</strong></p> <p>Cross-cultural studies of how music is perceived can shed light on the interplay between biological constraints and cultural influences that shape human perception. McDermott’s lab has performed several such studies with the participation of Tsimane’ tribe members, who live in relative isolation from Western culture and have had little exposure to Western music.</p> <p>In a <a href="">study published in 2016</a>, McDermott and his colleagues found that Westerners and Tsimane’ had different aesthetic reactions to chords, or combinations of notes. To Western ears, the combination of C and F# is very grating, but Tsimane’ listeners rated this chord just as likeable as other chords that Westerners would interpret as more pleasant, such as C and G.</p> <p>Later, Jacoby and McDermott found that both Westerners and Tsimane’ <a href="">are drawn to musical rhythms</a> composed of simple integer ratios, but the ratios they favor are different, based on which rhythms are more common in the music they listen to.</p> <p>In their new study, the researchers studied pitch perception using an experimental design in which they play a very simple tune, only two or three notes, and then ask the listener to sing it back.
The notes that were played could come from any octave within the range of human hearing, but listeners sang their responses within their vocal range, usually restricted to a single octave.</p> <p>Western listeners, especially those who were trained musicians, tended to reproduce the tune an exact number of octaves above or below what they heard, though they were not specifically instructed to do so. In Western music, the frequency of a note doubles with each ascending octave, so tones with frequencies of 27.5 hertz, 55 hertz, 110 hertz, 220 hertz, and so on, are all heard as the note A.</p> <p>Western listeners in the study, all of whom lived in New York or Boston, accurately reproduced sequences such as A-C-A, but in a different register, as though they heard the similarity of notes separated by octaves. However, the Tsimane’ did not.</p> <p>“The relative pitch was preserved (between notes in the series), but the absolute pitch produced by the Tsimane’ didn’t have any relationship to the absolute pitch of the stimulus,” Jacoby says. “That’s consistent with the idea that perceptual similarity is something that we acquire from exposure to Western music, where the octave is structurally very important.”</p> <p>The ability to reproduce the same note in different octaves may be honed by singing along with others whose natural registers are different, or singing along with an instrument being played in a different pitch range, Jacoby says.</p> <p><strong>Limits of perception</strong></p> <p>The study findings also shed light on the upper limits of pitch perception for humans. It has been known for a long time that Western listeners cannot accurately distinguish pitches above about 4,000 hertz, although they can still hear frequencies up to nearly 20,000 hertz.
In a traditional 88-key piano, the highest note is about 4,100 hertz.</p> <p>People have speculated that the piano was designed to go only that high because of a fundamental limit on pitch perception, but McDermott thought it could be possible that the opposite was true: That is, the limit was culturally influenced by the fact that few musical instruments produce frequencies higher than 4,000 hertz.</p> <p>The researchers found that although Tsimane’ musical instruments usually have upper limits much lower than 4,000 hertz, Tsimane’ listeners could distinguish pitches very well up to about 4,000 hertz, as evidenced by accurate sung reproductions of those pitch intervals. Above that threshold, their perceptions broke down, very similarly to Western listeners.</p> <p>“It looks almost exactly the same across groups, so we have some evidence for biological constraints on the limits of pitch,” Jacoby says.</p> <p>One possible explanation for this limit is that once frequencies reach about 4,000 hertz, the firing rates of the neurons of our inner ear can’t keep up and we lose a critical cue with which to distinguish different frequencies.</p> <p>“The new study contributes to the age-long debate about the interplays between culture and biological constraints in music,” says Daniel Pressnitzer, a senior research scientist at Paris Descartes University, who was not involved in the research. 
“This unique, precious, and extensive dataset demonstrates both striking similarities and unexpected differences in how Tsimane’ and Western listeners perceive or conceive musical pitch.”</p> <p>Jacoby and McDermott now hope to expand their cross-cultural studies to other groups who have had little exposure to Western music, and to perform more detailed studies of pitch perception among the Tsimane’.</p> <p>Such studies have already shown the value of including research participants other than the Western-educated, relatively wealthy college undergraduates who are the subjects of most academic studies on perception, McDermott says. These broader studies allow researchers to tease out different elements of perception that cannot be seen when examining only a single, homogenous group.</p> <p>“We’re finding that there are some cross-cultural similarities, but there also seems to be really striking variation in things that a lot of people would have presumed would be common across cultures and listeners,” McDermott says. “These differences in experience can lead to dissociations of different aspects of perception, giving you clues to what the parts of the perceptual system are.”</p> <p>The research was funded by the James S. 
McDonnell Foundation, the National Institutes of Health, and the Presidential Scholar in Society and Neuroscience Program at Columbia University.</p> Eduardo Undurraga, an assistant professor at the Pontifical Catholic University of Chile, runs a musical pitch perception experiment with a member of the Tsimane’ tribe of the Bolivian rainforest.Image: Josh McDermottResearch, Brain and cognitive sciences, Music, Behavior, McGovern Institute, School of Science, National Institutes of Health (NIH), Neuroscience, Center for Brains Minds and Machines Mehrdad Jazayeri and Hazel Sive awarded 2019 School of Science teaching prizes Nominated by peers and students, professors in brain and cognitive sciences and biology are recognized for excellence in graduate and undergraduate education. Wed, 18 Sep 2019 14:30:02 -0400 School of Science <p>The School of Science has announced that the recipients of the school’s 2019 Teaching Prizes for Graduate and Undergraduate Education are Mehrdad Jazayeri and Hazel Sive. Nominated by peers and students, the faculty members chosen to receive these prizes are selected to acknowledge their exemplary efforts in teaching graduate and undergraduate students.</p> <p><a href="">Mehrdad Jazayeri</a>, an associate professor in the Department of Brain and Cognitive Sciences and investigator at the McGovern Institute for Brain Research, is awarded the prize for graduate education for 9.014 (Quantitative Methods and Computational Models in Neuroscience). Earlier this year, he was recognized for excellence in graduate teaching by the Department of Brain and Cognitive Sciences and won a Graduate Student Council teaching award in 2016. In their nomination letters, peers and students alike remarked that he displays not only great knowledge, but extraordinary skill in teaching, most notably by ensuring everyone learns the material. 
Jazayeri does so by considering students’ diverse backgrounds and contextualizing subject material to relatable applications in various fields of science according to students’ interests. He also improves and adjusts the course content, pace, and intensity in response to student input via surveys administered throughout the semester.</p> <p><a href="">Hazel Sive</a>, a professor in the Department of Biology, member of the Whitehead Institute for Biomedical Research, and associate member of the Broad Institute of MIT and Harvard, is awarded the prize for undergraduate education. A MacVicar Faculty Fellow, she has been recognized with MIT’s highest undergraduate teaching award in the past, as well as the 2003 School of Science Teaching Prize for Graduate Education. Exemplified by her nominations, Sive’s laudable teaching career at MIT continues to receive praise from undergraduate students who take her classes. In recent post-course evaluations, students commended her exemplary and dedicated efforts to her field and to their education.</p> <p>The School of Science welcomes nominations for the teaching prize in the spring semester of each academic year. Nominations can be submitted at the&nbsp;<a href="">school's website</a>.</p> Mehrdad Jazayeri, an associate professor in the Department of Brain and Cognitive Sciences (left), and Hazel Sive, a professor in the Department of Biology, are the 2019 recipients of the School of Science Teaching Prizes in Graduate and Undergraduate Education, respectively.School of Science, Brain and cognitive sciences, McGovern Institute, Biology, Whitehead Institute, Broad Institute, MacVicar fellows, Faculty, Awards, honors and fellowships, Education, teaching, academics Detecting patients’ pain levels via their brain signals System could help with diagnosing and treating noncommunicative patients. 
Thu, 12 Sep 2019 00:00:00 -0400 Rob Matheson | MIT News Office <p>Researchers from MIT and elsewhere have developed a system that measures a patient’s pain level by analyzing brain activity from a portable neuroimaging device. The system could help doctors diagnose and treat pain in unconscious and noncommunicative patients, which could reduce the risk of chronic pain that can occur after surgery.</p> <p>Pain management is a surprisingly challenging, complex balancing act. Overtreating pain, for example, runs the risk of addicting patients to pain medication. Undertreating pain, on the other hand, may lead to long-term chronic pain and other complications. Today, doctors generally gauge pain levels according to their patients’ own reports of how they’re feeling. But what about patients who can’t communicate how they’re feeling effectively — or at all — such as children, elderly patients with dementia, or those undergoing surgery?</p> <p>In a paper presented at the International Conference on Affective Computing and Intelligent Interaction, the researchers describe a method to quantify pain in patients. To do so, they leverage an emerging neuroimaging technique called functional near-infrared spectroscopy (fNIRS), in which sensors placed around the head measure oxygenated hemoglobin concentrations that indicate neural activity.</p> <p>For their work, the researchers use only a few fNIRS sensors on a patient’s forehead to measure activity in the prefrontal cortex, which plays a major role in pain processing. Using the measured brain signals, the researchers developed personalized machine-learning models to detect patterns of oxygenated hemoglobin levels associated with pain responses.
When the sensors are in place, the models can detect whether a patient is experiencing pain&nbsp;with around 87 percent accuracy.</p> <p>“The way we measure pain hasn’t changed over the years,” says Daniel Lopez-Martinez, a PhD student in the Harvard-MIT Program in Health Sciences and Technology and a researcher at the MIT Media Lab. “If we don’t have metrics for how much pain someone experiences, treating pain and running clinical trials becomes challenging. The motivation is to quantify pain in an objective manner that doesn’t require the cooperation of the patient, such as when a patient is unconscious during surgery.”</p> <p>Traditionally, surgery patients receive anesthesia and medication based on their age, weight, previous diseases, and other factors. If they don’t move and their heart rate remains stable, they’re considered fine. But the brain may still be processing pain signals while they’re unconscious, which can lead to increased postoperative pain and long-term chronic pain. The researchers’ system could provide surgeons with real-time information about an unconscious patient’s pain levels, so they can adjust anesthesia and medication dosages accordingly to stop those pain signals.</p> <p>Joining Lopez-Martinez on the paper are: Ke Peng of Harvard Medical School, Boston Children’s Hospital, and the CHUM Research Centre in Montreal; Arielle Lee and David Borsook, both of Harvard Medical School, Boston Children’s Hospital, and Massachusetts General Hospital; and Rosalind Picard, a professor of media arts and sciences and director of affective computing research in the Media Lab.</p> <p><strong>Focusing on the forehead</strong></p> <p>In their work, the researchers adapted the fNIRS system and developed new machine-learning techniques to make the system more accurate and practical for clinical use.</p> <p>To use fNIRS, sensors are traditionally placed all around a patient’s head. 
Different wavelengths of near-infrared light shine through the skull and into the brain. Oxygenated and deoxygenated hemoglobin absorb the wavelengths differently, altering their signals slightly. When the infrared signals reflect back to the sensors, signal-processing techniques use the altered signals to calculate how much of each hemoglobin type is present in different regions of the brain.</p> <p>When a patient is hurt, regions of the brain associated with pain will see a sharp rise in oxygenated hemoglobin and decreases in deoxygenated hemoglobin, and these changes can be detected through fNIRS monitoring. But traditional fNIRS systems place sensors all around the patient’s head. This can take a long time to set up, and it can be difficult for patients who must lie down. It also isn’t really feasible for patients undergoing surgery.</p> <p>Therefore, the researchers adapted the fNIRS system to specifically measure signals only from the prefrontal cortex. While pain processing involves information from multiple regions of the brain, studies have shown that the prefrontal cortex integrates all that information. This means they need to place sensors only over the forehead.</p> <p>Another problem with traditional fNIRS systems is that they capture some signals from the skull and skin that contribute to noise. To fix that, the researchers installed additional sensors to capture and filter out those signals.</p> <p><strong>Personalized pain modeling</strong></p> <p>On the machine-learning side, the researchers trained and tested a model on a labeled pain-processing dataset they collected from 43 male participants.
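The hemoglobin calculation described earlier is typically done with the modified Beer-Lambert law: with two wavelengths, the two optical-density changes form a small linear system that can be inverted for the oxy- and deoxy-hemoglobin changes. A minimal sketch, using approximate extinction coefficients and assumed geometry values for demonstration (not the study's actual calibration):

```python
import numpy as np

# Modified Beer-Lambert law: at each wavelength lam, the measured change
# in optical density is a weighted sum of oxy- (HbO) and deoxy- (HbR)
# hemoglobin concentration changes:
#   dOD(lam) = (eps_HbO(lam) * dHbO + eps_HbR(lam) * dHbR) * L * DPF
# With two wavelengths this is a 2x2 linear system solvable for (dHbO, dHbR).

# Illustrative extinction coefficients [1/(mM*cm)] at 760 and 850 nm
# (approximate textbook values, for demonstration only).
EPS = np.array([[1.49, 3.84],    # 760 nm: [eps_HbO, eps_HbR]
                [2.53, 1.80]])   # 850 nm: [eps_HbO, eps_HbR]
L_CM = 3.0   # source-detector separation in cm (assumed)
DPF = 6.0    # differential pathlength factor (assumed)

def hemoglobin_changes(d_od_760, d_od_850):
    """Return (dHbO, dHbR) in mM given optical-density changes at the
    two wavelengths."""
    rhs = np.array([d_od_760, d_od_850]) / (L_CM * DPF)
    return np.linalg.solve(EPS, rhs)

# A pain-like event raises HbO and lowers HbR; the solver recovers those
# concentration changes from the raw optical measurements.
d_hbo, d_hbr = hemoglobin_changes(0.02, 0.035)
```

With these illustrative inputs the recovered oxygenated change is positive and the deoxygenated change slightly negative, the signature the article describes for a pain response.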
(Next they plan to collect a lot more data from diverse patient populations, including female patients — both during surgery and while conscious, and at a range of pain intensities — in order to better evaluate the accuracy of the system.)</p> <p>Each participant wore the researchers’ fNIRS device and was randomly exposed to an innocuous sensation and then about a dozen shocks to their thumb at two different pain intensities, measured on a scale of 1-10: a low level (about a 3/10) or high level (about 7/10). Those two intensities were determined with pretests: The participants self-reported the low level as being only strongly aware of the shock without pain, and the high level as the maximum pain they could tolerate.</p> <p>In training, the model extracted dozens of features from the signals related to how much oxygenated and deoxygenated hemoglobin was present, as well as how quickly the oxygenated hemoglobin levels rose. Those two metrics — quantity and speed — give a clearer picture of a patient’s experience of pain at the different intensities.</p> <p>Importantly, the model also automatically generates “personalized” submodels that extract high-resolution features from individual patient subpopulations. Traditionally, in machine learning, one model learns classifications — “pain” or “no pain” — based on average responses of the entire patient population. But that generalized approach can reduce accuracy, especially with diverse patient populations.</p> <p>The researchers’ model instead trains on the entire population but simultaneously identifies shared characteristics among subpopulations within the larger dataset. For example, pain responses to the two intensities may differ between young and old patients, or depending on gender. This generates learned submodels that break off and learn, in parallel, patterns of their patient subpopulations.
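The shared-plus-personalized idea can be illustrated with a deliberately simplified stand-in: a "population" logistic-regression model whose weights seed fine-tuned per-subgroup models. This is a toy sketch on synthetic data, not the paper's actual multi-task neural network:

```python
import numpy as np

rng = np.random.default_rng(0)

def _with_bias(X):
    return np.hstack([X, np.ones((len(X), 1))])

def train_logreg(X, y, w=None, lr=0.1, steps=500):
    """Plain logistic regression trained by gradient descent (bias folded in)."""
    Z = _with_bias(X)
    if w is None:
        w = np.zeros(Z.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Z @ w))
        w = w - lr * Z.T @ (p - y) / len(y)
    return w

def predict(w, X):
    return (_with_bias(X) @ w > 0).astype(int)

def make_group(n, pain_scale):
    """Synthetic 'fNIRS features': pain trials shift oxygenated-Hb
    features upward by pain_scale; no-pain trials sit near zero."""
    X = np.vstack([rng.normal(0.0, 0.3, (n, 2)),
                   rng.normal(pain_scale, 0.3, (n, 2))])
    y = np.array([0] * n + [1] * n)
    return X, y

# Two subgroups whose pain responses differ in magnitude
# (say, younger vs. older participants; purely illustrative).
X_young, y_young = make_group(50, 1.0)
X_old, y_old = make_group(50, 0.4)

# One model trained on the whole population...
w_shared = train_logreg(np.vstack([X_young, X_old]),
                        np.concatenate([y_young, y_old]))
# ...seeds personalized submodels fine-tuned on each subgroup, so they
# keep population-level structure while adapting to subgroup patterns.
w_young = train_logreg(X_young, y_young, w=w_shared.copy(), steps=200)
w_old = train_logreg(X_old, y_old, w=w_shared.copy(), steps=200)

acc_old = (predict(w_old, X_old) == y_old).mean()
```

Starting each submodel from the shared weights is the simplest way to capture "train on everyone, then specialize"; the published system does this jointly rather than sequentially.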
At the same time, however, they’re all still sharing information and learning patterns shared across the entire population. In short, they’re simultaneously leveraging fine-grained personalized information and population-level information to train better.</p> <p>The personalized models and a traditional model were evaluated in classifying pain or no-pain in a random hold-out set of participant brain signals from the dataset, where the self-reported pain scores were known for each participant. The personalized models outperformed the traditional model by about 20 percent, reaching about 87 percent accuracy.</p> <p>“Because we are able to detect pain with this high accuracy, using only a few sensors on the forehead, we have a solid basis for bringing this technology to a real-world clinical setting,” Lopez-Martinez says.</p> Researchers from MIT and elsewhere have developed a system that detects pain in patients by analyzing brain activity from a wearable neuroimaging device, which could help doctors diagnose and treat pain in unconscious and noncommunicative patients. Courtesy of the researchers, edited by MIT News

Hearing through the clatter
Study reveals brain regions that respond differently to the presence of background noise, suggesting the brain progressively homes in on and isolates sounds.
Mon, 09 Sep 2019 13:10:01 -0400 Sabbi Lall | McGovern Institute for Brain Research

<p>In a busy coffee shop, our eardrums are inundated with sound waves — people chatting, the clatter of cups, music playing — yet our brains somehow manage to untangle relevant sounds, like a barista announcing that our “coffee is ready,” from insignificant noise.
A new McGovern Institute for Brain Research study sheds light on how the brain accomplishes the task of extracting meaningful sounds from background noise — findings that could one day help to build artificial hearing systems and aid development of targeted hearing prosthetics.</p> <p>“These findings reveal a neural correlate of our ability to listen in noise, and at the same time demonstrate functional differentiation between different stages of auditory processing in the cortex,” explains <a href="">Josh McDermott</a>, an associate professor of brain and cognitive sciences at MIT, a member of the McGovern Institute and the Center for Brains, Minds and Machines, and the senior author of the study.</p> <p>The auditory cortex, a part of the brain that responds to sound, has long been known to have distinct anatomical subregions, but the role these areas play in auditory processing has remained a mystery. In their <a href="" target="_blank">study</a> published in <em>Nature Communications</em>, McDermott and former graduate student Alex Kell discovered that these subregions respond differently to the presence of background noise, suggesting that auditory processing occurs in steps that progressively home in on and isolate a sound of interest.</p> <p><strong>Background check</strong></p> <p>Previous studies have shown that the primary and non-primary subregions of the auditory cortex respond to sound with different dynamics, but these studies were largely based on brain activity in response to speech or simple synthetic sounds (such as tones and clicks).
Little was known about how these regions might work to support everyday auditory behavior.</p> <p>To test these subregions under more realistic conditions, McDermott and Kell, who is now a postdoctoral researcher at Columbia University, assessed changes in human brain activity while subjects listened to natural sounds with and without background noise.</p> <p>While lying in an MRI scanner, subjects listened to 30 different natural sounds, ranging from meowing cats to ringing phones, that were presented alone or embedded in real-world background noise, such as heavy rain.</p> <p>“When I started studying audition,” explains Kell, “I started just sitting around in my day-to-day life, just listening, and was astonished at the constant background noise that seemed to usually be filtered out by default. Most of these noises tended to be pretty stable over time, suggesting we could experimentally separate them. The project flowed from there.”</p> <p>To their surprise, Kell and McDermott found that the primary and non-primary regions of the auditory cortex responded differently to natural sound depending upon whether background noise was present.</p> <p>They found that activity of the primary auditory cortex was altered when background noise was present, suggesting that this region had not yet differentiated between meaningful sounds and background noise. Non-primary regions, however, responded similarly to natural sounds irrespective of whether noise was present, suggesting that cortical signals generated by sound are transformed or “cleaned up” to remove background noise by the time they reach the non-primary auditory cortex.</p> <p>“We were surprised by how big the difference was between primary and non-primary areas,” explains Kell, “so we ran a bunch more subjects, but kept seeing the same thing.
We had a ton of questions about what might be responsible for this difference, and that’s why we ended up running all these follow-up experiments.”</p> <p><strong>A general principle</strong></p> <p>Kell and McDermott went on to test whether these responses were specific to particular sounds, and discovered that the above effect remained stable no matter the source or type of sound. Music, speech, or a squeaky toy all activated the non-primary cortex region similarly, whether or not background noise was present.</p> <p>The authors also tested whether attention is relevant. Even when the researchers sneakily distracted subjects with a visual task in the scanner, the cortical subregions responded to meaningful sound and background noise in the same way, showing that attention is not driving this aspect of sound processing. In other words, even when we are focused on reading a book, our brain is diligently sorting the sound of our meowing cat from the patter of heavy rain outside.</p> <p><strong>Future directions</strong></p> <p>The McDermott lab is now building computational models of the so-called “noise robustness” found in the <em>Nature Communications</em> study, and Kell is pursuing a finer-grained understanding of sound processing in his postdoctoral work at Columbia by exploring the neural circuit mechanisms underlying this phenomenon.</p> <p>By gaining a deeper understanding of how the brain processes sound, the researchers hope their work will contribute to improved diagnosis and treatment of hearing dysfunction. Such research could help to reveal the origins of listening difficulties that accompany developmental disorders or age-related hearing loss. For instance, if hearing loss results from dysfunction in sensory processing, this could be caused by abnormal noise robustness in the auditory cortex.
Normal noise robustness might instead suggest that there are impairments elsewhere in the brain — for example, a breakdown in higher executive function.</p> <p>“In the future,” McDermott says, “we hope these noninvasive measures of auditory function may become valuable tools for clinical assessment.”</p> The primary region of the human auditory cortex (outlined in white) responds differently (blue) to natural sound when background noise is present. Image: Alex Kell

Robotic thread is designed to slip through the brain’s blood vessels
Magnetically controlled device could deliver clot-reducing therapies in response to stroke or other brain blockages.
Wed, 28 Aug 2019 14:00:00 -0400 Jennifer Chu | MIT News Office

<p>MIT engineers have developed a magnetically steerable, thread-like robot that can actively glide through narrow, winding pathways, such as the labyrinthine vasculature of the brain.</p> <p>In the future, this robotic thread may be paired with existing endovascular technologies, enabling doctors to remotely guide the robot through a patient’s brain vessels to quickly treat blockages and lesions, such as those that occur in aneurysms and stroke.</p> <p>“Stroke is the number five cause of death and a leading cause of disability in the United States. If acute stroke can be treated within the first 90 minutes or so, patients’ survival rates could increase significantly,” says Xuanhe Zhao, associate professor of mechanical engineering and of civil and environmental engineering at MIT. “If we could design a device to reverse blood vessel blockage within this ‘golden hour,’ we could potentially avoid permanent brain damage.
That’s our hope.”</p> <p>Zhao and his team, including lead author Yoonho Kim, a graduate student in MIT’s Department of Mechanical Engineering, describe their soft robotic design today in the journal <em>Science Robotics. </em>The paper’s other co-authors are MIT graduate student German Alberto Parada and visiting student Shengduo Liu.</p> <p><strong>In a tight spot</strong></p> <p>To clear blood clots in the brain, doctors often perform an endovascular procedure, a minimally invasive surgery in which a surgeon inserts a thin wire through a patient’s main artery, usually in the leg or groin. Guided by a fluoroscope that simultaneously images the blood vessels using X-rays, the surgeon then manually rotates the wire up into the damaged brain vessel. A catheter can then be threaded up along the wire to deliver drugs or clot-retrieval devices to the affected region.</p> <p>Kim says the procedure can be physically taxing, requiring surgeons, who must be specifically trained in the task, to endure repeated radiation exposure from fluoroscopy.</p> <p>“It’s a demanding skill, and there are simply not enough surgeons for the patients, especially in suburban or rural areas,” Kim says.</p> <p>The medical guidewires used in such procedures are passive, meaning they must be manipulated manually, and are typically made from a core of metallic alloys, coated in polymer, a material that Kim says could potentially generate friction and damage vessel linings if the wire were to get temporarily stuck in a particularly tight space.</p> <p>The team realized that developments in their lab could help improve such endovascular procedures, both in the design of the guidewire and in reducing doctors’ exposure to any associated radiation.</p> <div class="cms-placeholder-content-video"></div> <p><strong>Threading a needle</strong></p> <p>Over the past few years, the team has built up expertise in both <a href="">hydrogels</a> — biocompatible materials made mostly of water — and 3-D-printed 
<a href="">magnetically-actuated materials</a> that can be designed to crawl, jump, and even catch a ball, simply by following the direction of a magnet.</p> <p>In this new paper, the researchers combined their work in hydrogels and in magnetic actuation, to produce a magnetically steerable, hydrogel-coated robotic thread, or guidewire, which they were able to make thin enough to magnetically guide through a life-size silicone replica of the brain’s blood vessels.</p> <p>The core of the robotic thread is made from nickel-titanium alloy, or “nitinol,” a material that is both bendy and springy. Unlike a clothes hanger, which would retain its shape when bent, a nitinol wire would return to its original shape, giving it more flexibility in winding through tight, tortuous vessels. The team coated the wire’s core in a rubbery paste, or ink, which they embedded throughout with magnetic particles.</p> <p>Finally, they used a chemical process they developed previously, to coat and bond the magnetic covering with hydrogel — a material that does not affect the responsiveness of the underlying magnetic particles and yet provides the wire with a smooth, friction-free, biocompatible surface.</p> <p>They demonstrated the robotic thread’s precision and activation by using a large magnet, much like the strings of a marionette, to steer the thread through an obstacle course of small rings, reminiscent of a thread working its way through the eye of a needle.</p> <p>The researchers also tested the thread in a life-size silicone replica of the brain’s major blood vessels, including clots and aneurysms, modeled after the CT scans of an actual patient’s brain. 
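The steering principle at work, a magnetized tip rotating to follow an applied field, comes down to the magnetic torque tau = m x B. A small sketch with illustrative numbers (the moment and field values are assumptions, not the device's specs):

```python
import numpy as np

def tip_torque(m, B):
    """Magnetic torque (N*m) on the thread's magnetized tip: tau = m x B.
    The torque rotates the tip toward alignment with the applied field,
    which is how an external magnet steers the thread at a vessel branch."""
    return np.cross(m, B)

# Illustrative values (assumed, not the device's actual specifications):
m = np.array([5e-3, 0.0, 0.0])   # tip magnetic moment (A*m^2), along +x
B = np.array([0.0, 0.02, 0.0])   # 20 mT field applied along +y

tau = tip_torque(m, B)           # torque about +z bends the tip toward +y

# Once the tip aligns with the field, the torque vanishes and the tip
# holds its new heading.
tau_aligned = tip_torque(np.array([0.0, 5e-3, 0.0]), B)
```

Rotating the external magnet therefore continuously redirects the tip, much as the article's marionette analogy suggests.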
The team filled the silicone vessels with a liquid simulating the viscosity of blood, then manually manipulated a large magnet around the model to steer the robot through the vessels’ winding, narrow paths.</p> <p>Kim says the robotic thread can be functionalized, meaning that features can be added — for example, to deliver clot-reducing drugs or break up blockages with laser light. To demonstrate the latter, the team replaced the thread’s nitinol core with an optical fiber and found that they could magnetically steer the robot and activate the laser once the robot reached a target region.</p> <p>When the researchers ran comparisons between the robotic thread coated versus uncoated with hydrogel, they found that the hydrogel gave the thread a much-needed, slippery advantage, allowing it to glide through tighter spaces without getting stuck. In an endovascular surgery, this property would be key to preventing friction and injury to vessel linings as the thread works its way through.</p> <p>“One of the challenges in surgery has been to be able to navigate through complicated blood vessels in the brain, which has a very small diameter, where commercial catheters can’t reach,” says Kyujin Cho, professor of mechanical engineering at Seoul National University. “This research has shown potential to overcome this challenge and enable surgical procedures in the brain without open surgery.”</p> <p>And just how can this new robotic thread keep surgeons radiation-free? Kim says that a magnetically steerable guidewire does away with the necessity for surgeons to physically push a wire through a patient’s blood vessels. 
This means that doctors also wouldn’t have to be in close proximity to a patient or, more importantly, to the radiation-generating fluoroscope.</p> <p>In the near future, he envisions endovascular surgeries that incorporate existing magnetic technologies, such as pairs of large magnets, the directions of which doctors can manipulate from just outside the operating room, away from the fluoroscope imaging the patient’s brain, or even in an entirely different location.</p> <p>“Existing platforms could apply magnetic field and do the fluoroscopy procedure at the same time to the patient, and the doctor could be in the other room, or even in a different city, controlling the magnetic field with a joystick,” Kim says. “Our hope is to leverage existing technologies to test our robotic thread in vivo in the next step.”</p> <p>This research was funded, in part, by the Office of Naval Research, the MIT Institute for Soldier Nanotechnologies, and the National Science Foundation (NSF).</p> MIT engineers have developed robotic thread (in black) that can be steered magnetically and is small enough to work through narrow spaces such as the vasculature of the human brain. The researchers envision the technology may be used in the future to clear blockages in patients with stroke and aneurysms. Image courtesy of the researchers

IBM gives artificial intelligence computing at MIT a lift
Nearly $12 million machine will let MIT researchers run more ambitious AI models.
Mon, 26 Aug 2019 16:55:01 -0400 Kim Martineau | MIT Quest for Intelligence

<p>IBM designed Summit, the fastest supercomputer on Earth, to run the calculation-intensive models that power modern artificial intelligence (AI).
Now MIT is about to get a slice.&nbsp;</p> <p>IBM pledged earlier this year to donate an $11.6 million computer cluster to MIT modeled after the architecture of Summit, the supercomputer it built at Oak Ridge National Laboratory for the U.S. Department of Energy. The donated cluster is expected to come online this fall when the&nbsp;<a href="">MIT Stephen A. Schwarzman College of Computing</a>&nbsp;opens its doors, allowing researchers to run more elaborate AI models to tackle a range of problems, from developing a better hearing aid to designing a longer-lived lithium-ion battery.&nbsp;</p> <p>“We’re excited to see a range of AI projects at MIT get a computing boost, and we can’t wait to see what magic awaits,” says&nbsp;<a href="">John E. Kelly III</a>, executive vice president of IBM, who announced the gift in February at MIT’s&nbsp;<a href="">launch celebration</a>&nbsp;of the MIT Schwarzman College of Computing.&nbsp;&nbsp;</p> <p>IBM has named the cluster <a href="" target="_blank">Satori</a>, a Zen Buddhism term for “sudden enlightenment.” Physically the size of a shipping container,&nbsp;Satori is intellectually closer to a Ferrari, capable of zipping through 2 quadrillion calculations per second. That’s the&nbsp;equivalent of each person on Earth performing more than&nbsp;10 million multiplication problems each second for an entire year, making Satori nimble enough to&nbsp;join the middle ranks of the world’s&nbsp;<a href="">500 fastest</a>&nbsp;computers.</p> <p>Rapid progress in AI has fueled a relentless demand for computing power to train more elaborate models on ever-larger datasets. 
At the same time, federal funding for academic computing facilities has been on a three-decade decline.&nbsp;<a href="">Christopher Hill</a>, director of MIT’s Research Computing Project, puts the current demand at MIT at five times&nbsp;what the Institute can offer.&nbsp;&nbsp;</p> <p>“IBM’s gift couldn’t come at a better time,” says&nbsp;<a href="">Maria Zuber</a>, a geophysics professor and MIT’s vice president of research. “The opening of the new college will only increase demand for computing power. Satori will go a long way in helping to ease the crunch.”</p> <p>The computing gap was immediately apparent to&nbsp;<a href="">John Cohn</a>, chief scientist at the&nbsp;<a href="">MIT-IBM Watson AI Lab</a>, when the lab opened last year. “The cloud alone wasn’t giving us all that we needed for challenging AI training tasks,” he says. “The expense and long run times made us ask, could we bring more compute power here, to MIT?”</p> <p>It’s a mission Satori was built to fill, with IBM Power9 processors, a fast internal network, a large memory, and 256 graphics processing units (GPUs). Designed to rapidly process video-game images, graphics processors have become the workhorse for modern AI applications. Satori, like Summit, has been configured to wring as much power from each GPU as possible.</p> <p>IBM’s gift follows a history of collaborations with MIT that have paved the way for computing breakthroughs. In 1956, IBM helped launch the MIT Computation Center with the donation of an IBM 704, the first mass-produced computer to handle complex math. Nearly three decades later, IBM helped fund&nbsp;<a href="">Project Athena</a>, an initiative that brought networked computing to campus. 
Together, these initiatives spawned time-share operating systems, foundational programming languages, instant messaging, and the network-security protocol Kerberos, among other technologies.&nbsp;</p> <p>More recently, IBM&nbsp;<a href="">agreed to invest</a>&nbsp;$240 million over 10 years to establish the MIT-IBM Watson AI Lab, a founding sponsor of MIT’s&nbsp;<a href="">Quest for Intelligence</a>. In addition to filling the computing gap at MIT, Satori will be configured to allow researchers to exchange data with all major commercial cloud providers, as well as prepare their code to run on IBM’s Summit supercomputer.</p> <p><a href="">Josh McDermott</a>, an associate professor at MIT’s&nbsp;<a href="">Department of Brain and Cognitive Sciences</a>, is currently using Summit to develop a better hearing aid, but before he and his students could run their models, they spent countless hours getting the code ready. In the future, Satori will expedite the process, he says, and in the longer term, make more ambitious projects possible.</p> <p>“We’re currently building computer systems to model one sensory system but we’d like to be able to build models that can see, hear and touch,” he says. “That requires a much bigger scale.”</p> <p><a href="">Richard Braatz</a>, the Edwin R. Gilliland Professor at MIT’s&nbsp;<a href="">Department of Chemical Engineering</a>, is using AI to improve&nbsp;lithium-ion battery technologies. He and his colleagues recently developed a machine learning algorithm to predict a battery’s lifespan from past charging cycles, and now, they’re developing multiscale simulations to test new materials and designs for extending battery life. With a boost from a computer like Satori, the simulations could capture key physical and chemical processes that accelerate discovery.
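The lifespan-from-early-cycles idea can be illustrated with a toy regression: synthesize cells whose early capacity fade predicts cycle life, then fit a least-squares model in log space. The data model and feature below are invented for illustration; the actual MIT work uses richer electrochemical features:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy premise: cells that fade faster over their first 100 cycles tend
# to die sooner. All quantities below are synthetic, illustration only.
n_cells = 40
fade = rng.uniform(1e-4, 1e-3, n_cells)             # capacity loss per cycle
cycle_life = 0.2 / fade * rng.lognormal(0.0, 0.1, n_cells)

# Early-cycle feature: measured capacity drop over cycles 1-100 (noisy).
early_drop = 100.0 * fade + rng.normal(0.0, 1e-3, n_cells)

# Least-squares fit of log cycle life on log early drop.
X = np.column_stack([np.log(early_drop), np.ones(n_cells)])
y = np.log(cycle_life)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

pred = X @ coef
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

Because early fade and eventual lifespan are tied together by construction here, the fit is tight; the real result is that carefully chosen early-cycle features carry similar predictive signal in measured cells.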
“With better predictions, we can bring new ideas to market faster,” he says.&nbsp;</p> <p>Satori will be housed at a former silk mill turned data center, the&nbsp;<a href="">Massachusetts Green High Performance Computing Center</a> (MGHPCC) in Holyoke, Massachusetts, and connect to MIT via dedicated, high-speed fiber optic cables.&nbsp;At 150 kilowatts, Satori will draw as much power as a mid-sized building at MIT, but its carbon footprint will be nearly fully offset by the use of hydro and nuclear power at the Holyoke facility.&nbsp;Equipped with&nbsp;energy-efficient cooling, lighting, and power distribution, the MGHPCC was the first academic data center to receive&nbsp;LEED-platinum status, the highest green-building award, in 2011.</p> <p>“Siting Satori at Holyoke minimizes its carbon emissions and environmental impact without compromising its scientific impact,” says John Goodhue, executive director of the MGHPCC.</p> <p>Visit the <a href="">Satori website</a> for more information.</p> An $11.6 million artificial intelligence computing cluster donated by IBM to MIT will come online this fall at the Massachusetts Green High Performance Computing Center (MGHPCC) in Holyoke, Massachusetts. Photo: Helen Hill/MGHPCC

Two studies reveal benefits of mindfulness for middle school students
Focusing awareness on the present moment can enhance academic performance and lower stress levels.
Mon, 26 Aug 2019 14:00:00 -0400 Anne Trafton | MIT News Office <p>Two new studies from MIT suggest that mindfulness — the practice of focusing one’s awareness on the present moment — can enhance academic performance and mental health in middle schoolers. The researchers found that more mindfulness correlates with better academic performance, fewer suspensions from school, and less stress.</p> <p>“By definition, mindfulness is the ability to focus attention on the present moment, as opposed to being distracted by external things or internal thoughts. If you’re focused on the teacher in front of you, or the homework in front of you, that should be good for learning,” says John Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research.</p> <p>The researchers also showed, for the first time, that mindfulness training can alter brain activity in students. Sixth-graders who received mindfulness training not only reported feeling less stressed, but their brain scans revealed reduced activation of the amygdala, a brain region that processes fear and other emotions, when they viewed images of fearful faces.</p> <p>Together, the findings suggest that offering mindfulness training in schools could benefit many students, says Gabrieli, who is the senior author of both studies.&nbsp;</p> <p>“We think there is a reasonable possibility that mindfulness training would be beneficial for children as part of the daily curriculum in their classroom,” he says. “What’s also appealing about mindfulness is that there are pretty well-established ways of teaching it.”</p> <p><strong>In the moment</strong></p> <p>Both studies were performed at charter schools in Boston. In one of the papers, which appears today in the journal <em>Behavioral Neuroscience</em>, the MIT team studied about 100 sixth-graders. 
Half of the students received mindfulness training every day for eight weeks, while the other half took a coding class. The mindfulness curriculum, created by the nonprofit program <a href="">Calmer Choice</a>,&nbsp;was&nbsp;designed to encourage students to pay attention to their breath, and to focus on the present moment rather than thoughts of the past or the future.</p> <p>Students who received the mindfulness training reported that their stress levels went down after the training, while the students in the control group did not. Students in the mindfulness training group also reported fewer negative feelings, such as sadness or anger, after the training.</p> <p>About 40 of the students also participated in brain imaging studies before and after the training. The researchers measured activity in the amygdala as the students looked at pictures of faces expressing different emotions.</p> <p>At the beginning of the study, before any training, students who reported higher stress levels showed more amygdala activity when they saw fearful faces. This is consistent with previous research showing that the amygdala can be overactive in people who experience more stress, leading them to have stronger negative reactions to adverse events.</p> <p>“There’s a lot of evidence that an overly strong amygdala response to negative things is associated with high stress in early childhood and risk for depression,” Gabrieli says.</p> <p>After the mindfulness training, students showed a smaller amygdala response when they saw the fearful faces, consistent with their reports that they felt less stressed. 
This suggests that mindfulness training could potentially help prevent or mitigate mood disorders linked with higher stress levels, the researchers say.</p> <p>Richard Davidson, a professor of psychology and psychiatry at the University of Wisconsin, says that the findings suggest there could be great benefit to implementing mindfulness training in middle schools.</p> <p>“This is really one of the very first rigorous studies with children of that age to demonstrate behavioral and neural benefits of a simple mindfulness training,” says Davidson, who was not involved in the study.</p> <p><strong>Evaluating mindfulness</strong></p> <p>In the other paper, which appeared in the journal <em>Mind, Brain, and Education</em> in June, the researchers did not perform any mindfulness training but used a questionnaire to evaluate mindfulness in more than 2,000 students in grades 5-8. The questionnaire was based on the Mindfulness Attention Awareness Scale, which is often used in mindfulness studies on adults. Participants are asked to rate how strongly they agree with statements such as “I rush through activities without being really attentive to them.”</p> <p>The researchers compared the questionnaire results with students’ grades, their scores on statewide standardized tests, their attendance rates, and the number of times they had been suspended from school. Students who showed more mindfulness tended to have better grades and test scores, as well as fewer absences and suspensions.</p> <p>“People had not asked that question in any quantitative sense at all, as to whether a more mindful child is more likely to fare better in school,” Gabrieli says. “This is the first paper that says there is a relationship between the two.”</p> <p>The researchers now plan to do a full school-year study, with a larger group of students across many schools, to examine the longer-term effects of mindfulness training. 
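The relationship reported here is a simple correlation between questionnaire scores and academic outcomes. As a minimal sketch of the underlying statistic (with entirely made-up numbers, not the study's data), the Pearson correlation between mindfulness scores and grades can be computed as:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical illustration only: MAAS-style mindfulness scores paired
# with GPAs; higher mindfulness tracking higher grades yields r near +1.
mindfulness = [2.1, 3.4, 4.0, 2.8, 3.9, 4.5, 1.9, 3.1]
gpa         = [2.6, 3.2, 3.8, 2.9, 3.5, 3.9, 2.4, 3.0]
r = pearson_r(mindfulness, gpa)
print(round(r, 2))
```

A positive r indicates only association, of course, which is why the researchers frame the questionnaire study as correlational and the training study as the causal test.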
Shorter programs like the two-month training used in the <em>Behavioral Neuroscience</em> study would most likely not have a lasting impact, Gabrieli says.</p> <p>“Mindfulness is like going to the gym. If you go for a month, that’s good, but if you stop going, the effects won’t last,” he says. “It’s a form of mental exercise that needs to be sustained.”</p> <p>The research was funded by the Walton Family Foundation, the Poitras Center for Psychiatric Disorders Research at the McGovern Institute for Brain Research, and the National Council of Science and Technology of Mexico. Camila Caballero ’13, now a graduate student at Yale University, is the lead author of the <em>Mind, Brain, and Education</em> study. Caballero and MIT postdoc Clemens Bauer are lead authors of the <em>Behavioral Neuroscience</em> study. Additional collaborators were from the Harvard Graduate School of Education, Transforming Education, Boston Collegiate Charter School, and Calmer Choice.</p> An MIT study suggests that mindfulness can improve mental health and academic performance in middle school students.Research, Learning, Behavior, Brain and cognitive sciences, McGovern Institute, School of Science, Neuroscience, Mental health New method classifies brain cells based on electrical signals Making electrophysiology more informative, team shows how to distinguish four classes of cells by spike waveform. Thu, 22 Aug 2019 13:50:01 -0400 David Orenstein | Picower Institute <p>For decades, neuroscientists have relied on a technique for reading out electrical “spikes” of brain activity in live, behaving subjects that tells them very little about the types of cells they are monitoring. 
In a new study, researchers at the University of Tuebingen, Germany, and MIT’s Picower Institute for Learning and Memory demonstrate a way to increase their insight by distinguishing four distinct classes of cells from that spiking information.</p> <p>The advance offers brain researchers the chance to better understand how different kinds of neurons are contributing to behavior, perception, and memory, and how they are malfunctioning in cases of psychiatric or neurological diseases. Much like mechanics can better understand and troubleshoot a machine by watching how each part works as it runs, neuroscientists, too, are better able to understand the brain when they can tease apart the roles different cells play while it thinks.</p> <p>“We know from anatomical studies that there are multiple types of cells in the brain and if they are there, they must be there for a reason,” says Earl Miller, the Picower Professor of Neuroscience in the Department of Brain and Cognitive Sciences at MIT, and co-senior author of the paper in <em>Current Biology</em>. “We can’t truly understand the functional circuitry of the brain until we fully understand what different roles these different cell types might play.”</p> <p>Miller collaborated with the Tuebingen-based team of lead author Caterina Trainito, Constantin von Nicolai, and Professor Markus Siegel, co-senior author and a former postdoc in Miller’s lab, to develop the new way to wring more neuron type information from electrophysiology measurements. 
Those measures track the rapid voltage changes, or spikes, that neurons exhibit as they communicate in circuits, a phenomenon essential for brain function.</p> <p>“Identifying different cell types will be key to understand both local and large-scale information processing in the brain,” Siegel says.</p> <p><strong>Four is greater than two</strong></p> <p>At best, neuroscientists have so far only been able to determine from electrophysiology whether a neuron was excitatory or inhibitory. That’s because they only analyzed the difference in the width of the spike. The typical amount of data in an electrophysiology study — spikes from a few hundred neurons — only supported that single degree of distinction, Miller says.</p> <p>But the new study could go farther because it derives from a dataset of recordings from nearly 2,500 neurons. Miller and Siegel gathered the data years ago at MIT from three regions in the cortex of animals who were performing experimental tasks that integrated perception and decision-making.</p> <p>“We recognized the uncommonly rich resource at our disposal,” Siegel says.</p> <p>Thus, the team decided to put the dataset through a battery of sophisticated statistical and computational tools to analyze the waveforms of the spikes. Their analysis showed that the waveforms could actually be sorted along two dimensions: how quickly the waveform ranges between its lowest and highest voltage (“trough to peak duration”), and how quickly the voltage changes again afterward, returning from the peak to the normal level (“repolarization time”). Plotting those two factors against each other neatly sorted the cells into four distinct “clusters.” Not only were the clusters evident across the whole dataset, but individually within each of the three cortical regions, too.</p> <p>For the distinction to have any meaning, the four classes of cells would have to have functional differences. 
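At heart, the two-dimensional sorting described above — trough-to-peak duration plotted against repolarization time — is a clustering problem. A minimal sketch of how four classes could fall out of two waveform features, using synthetic data and a bare-bones k-means rather than the authors' actual statistical pipeline (all feature values below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the two spike-waveform features: trough-to-peak
# duration and repolarization time (ms), for four simulated cell classes.
centers = np.array([[0.2, 0.3], [0.2, 0.9], [0.7, 0.3], [0.7, 0.9]])
X = np.vstack([c + 0.04 * rng.standard_normal((200, 2)) for c in centers])

def kmeans(X, init_idx, iters=20):
    """Minimal k-means; `init_idx` picks the starting centroids."""
    cents = X[list(init_idx)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each point to its nearest centroid, then recompute means.
        labels = np.argmin(((X[:, None, :] - cents[None]) ** 2).sum(-1), axis=1)
        cents = np.array([X[labels == j].mean(axis=0) for j in range(len(init_idx))])
    return cents, labels

# Seed one centroid in each simulated class (indices 0, 200, 400, 600).
cents, labels = kmeans(X, init_idx=(0, 200, 400, 600))
print(sorted(np.bincount(labels).tolist()))
```

With well-separated feature clouds, the four clusters are recovered cleanly; the study's harder problem was showing that real waveforms actually form such separable clouds.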
To test that, the researchers decided to sort the cells out based on other criteria such as their “firing rate” (how often they spike), whether they tend to fire in bursts, and how variable their intervals are between spikes — all factors in how they participate in and influence the circuits they are connected in. Indeed, the cell classes remained distinct by these measures.</p> <p>In yet another phase of analysis, the cell classes also remained distinguishable as the researchers watched them respond to the animals perceiving and processing visual stimulation. But in this case, they saw the cells play different roles in different regions. A class 1 cell, for example, might respond differently in one region than it did in another.</p> <p>“These cell types are truly different cell types that have different properties,” Miller says. “But they have different functions in different cortical areas because different cortical areas have different functions.”</p> <p><strong>New research capability</strong></p> <p>In the study, the authors speculate about which real neuron types their four mathematically defined classes most closely resemble, but they don’t yet offer a definitive determination. Still, Miller says the finer-grained distinctions the study draws are enough to make him want to reanalyze old neural spiking data to see what new things he can learn.</p> <p>One of Miller’s main research interests is the nature of working memory — our ability to hold information like directions in mind while we make use of it. His research has revealed that it is enabled by a complex interplay of brain regions and precisely timed bursts of neural activity. 
Now he may be able to figure out how different classes of neurons play specific roles in specific regions to endow us with this handy mental ability.</p> <p>And both Miller’s and Siegel’s labs are particularly interested in different brain rhythms, which are abundant in the brain and likely play a key role in orchestrating communication between neurons. The new results open a powerful new window for them to unravel what role different neuron classes play in these brain rhythms.</p> <p>The U.S. National Institutes of Health, the European Research Council, the Deutsche Forschungsgemeinschaft, and the Center for Integrative Neuroscience provided funding for the study.</p> Picower Professor Earl Miller teamed up with the lab of former postdoc Markus Siegel at the University of Tuebingen to devise a method for learning more about the different kinds of neurons that can be measured with electrodes.Photo: David Orenstein/Picower InstitutePicower Institute, Brain and cognitive sciences, School of Science, Neuroscience, Research, Behavior A new way to deliver drugs with pinpoint targeting Magnetic particles allow drugs to be released at precise times and in specific areas. Mon, 19 Aug 2019 10:59:59 -0400 David L. Chandler | MIT News Office <p>Most pharmaceuticals must either be ingested or injected into the body to do their work. Either way, it takes some time for them to reach their intended targets, and they also tend to spread out to other areas of the body. Now, researchers at MIT and elsewhere have developed a minimally invasive system that can release medical treatments at precise times and that ultimately could also deliver those drugs to specifically targeted areas, such as a specific group of neurons in the brain.</p> <p>The new approach is based on the use of tiny magnetic particles enclosed within a hollow bubble of lipids (fatty molecules) filled with water, known as a liposome. 
The drug of choice is encapsulated within these bubbles, and can be released by applying a magnetic field to heat up the particles, allowing the drug to escape from the liposome and into the surrounding tissue.</p> <p>The findings are reported today in the journal <em>Nature Nanotechnology</em> in a paper by MIT postdoc Siyuan Rao, Associate Professor Polina Anikeeva, and 14 others at MIT, Stanford University, Harvard University, and the Swiss Federal Institute of Technology in Zurich.</p> <p>“We wanted a system that could deliver a drug with temporal precision, and could eventually target a particular location,” Anikeeva explains. “And if we don’t want it to be invasive, we need to find a non-invasive way to trigger the release.”</p> <p>Magnetic fields, which can easily penetrate through the body — as demonstrated by detailed internal images produced by magnetic resonance imaging, or MRI — were a natural choice. The hard part was finding materials that could be triggered to heat up by using a very weak magnetic field (about one-hundredth the strength of that used for MRI), in order to prevent damage to the drug or surrounding tissues, Rao says.</p> <p>Rao came up with the idea of taking magnetic nanoparticles, which had already been shown to be capable of being heated by placing them in a magnetic field, and packing them into these spheres called liposomes. These are like little bubbles of lipids, which naturally form a spherical double layer surrounding a water droplet.</p> <p>When placed inside a high-frequency but low-strength magnetic field, the nanoparticles heat up, warming the lipids and making them undergo a transition from solid to liquid, which makes the layer more porous — just enough to let some of the drug molecules escape into the surrounding areas. When the magnetic field is switched off, the lipids re-solidify, preventing further releases. 
Over time, this process can be repeated, thus releasing doses of the enclosed drug at precisely controlled intervals.</p> <p>The drug carriers were engineered to be stable inside the body at the normal body temperature of 37 degrees Celsius, but able to release their payload of drugs at a temperature of 42 degrees. “So we have a magnetic switch for drug delivery,” and that amount of heat is small enough “so that you don’t cause thermal damage to tissues,” says Anikeeva, who holds appointments in the departments of Materials Science and Engineering and of Brain and Cognitive Sciences.</p> <p>In principle, this technique could also be used to guide the particles to specific, pinpoint locations in the body, using gradients of magnetic fields to push them along, but that aspect of the work is an ongoing project. For now, the researchers have been injecting the particles directly into the target locations, and using the magnetic fields to control the timing of drug releases. “The technology will allow us to address the spatial aspect,” Anikeeva says, but that has not yet been demonstrated.</p> <p>This could enable very precise treatments for a wide variety of conditions, she says. “Many brain disorders are characterized by erroneous activity of certain cells. When neurons are too active or not active enough, that manifests as a disorder, such as Parkinson’s, or depression, or epilepsy.” If a medical team wanted to deliver a drug to a specific patch of neurons and at a particular time, such as when an onset of symptoms is detected, without subjecting the rest of the brain to that drug, this system “could give us a very precise way to treat those conditions,” she says.</p> <p>Rao says that making these nanoparticle-activated liposomes is actually quite a simple process. “We can prepare the liposomes with the particles within minutes in the lab,” she says, and the process should be “very easy to scale up” for manufacturing. 
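The on/off "magnetic switch" described above can be caricatured as a toy simulation: the field heats the liposome toward 42 °C, drug leaks only while the temperature sits above the lipid transition point, and cooling after the field switches off stops the release. Every parameter below (heating rate, transition temperature, leak fraction) is a hypothetical value chosen for illustration, not a number from the paper:

```python
# Toy model of field-gated drug release (all parameters hypothetical).
BODY_T, FIELD_T, TRANSITION_T = 37.0, 42.0, 41.5   # degrees Celsius
HEAT_RATE, LEAK_FRACTION = 0.5, 0.02               # °C/step, fraction/step

def simulate(field_schedule, drug=1.0):
    """Return total drug released over a per-step on/off field schedule."""
    temp, released = BODY_T, 0.0
    for field_on in field_schedule:
        # Heat while the field is on, cool while it is off, within bounds.
        temp += HEAT_RATE if field_on else -HEAT_RATE
        temp = min(max(temp, BODY_T), FIELD_T)
        # Above the lipid transition, the membrane is porous and leaks.
        if temp >= TRANSITION_T:
            dose = drug * LEAK_FRACTION
            drug -= dose
            released += dose
    return released

pulse = [True] * 30 + [False] * 30        # one field pulse, then cooling
two_pulses = ([True] * 30 + [False] * 30) * 2
print(round(simulate(pulse), 3))
```

Running `simulate(two_pulses)` releases more drug than a single pulse, mirroring the repeatable, interval-controlled dosing the researchers describe.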
And the system is broadly applicable for drug delivery: “we can encapsulate any water-soluble drug,” and with some adaptations, other drugs as well, she says.</p> <p>One key to developing this system was perfecting and calibrating a way of making liposomes of a highly uniform size and composition. This involves mixing a water base with the fatty acid lipid molecules and magnetic nanoparticles and homogenizing them under precisely controlled conditions. Anikeeva compares it to shaking a bottle of salad dressing to get the oil and vinegar mixed, but controlling the timing, direction and strength of the shaking to ensure precise mixing.</p> <p>Anikeeva says that while her team has focused on neurological disorders, as that is their specialty, the drug delivery system is actually quite general and could be applied to almost any part of the body, for example to deliver cancer drugs, or even to deliver painkillers directly to an affected area instead of delivering them systemically and affecting the whole body. “This could deliver it to where it’s needed, and not deliver it continuously,” but only as needed.</p> <p>Because the magnetic particles themselves are similar to those already in widespread use as contrast agents for MRI scans, the regulatory approval process for their use may be simplified, as their biological compatibility has largely been proven.</p> <p>The team included researchers in MIT’s departments of Materials Science and Engineering and Brain and Cognitive Sciences, as well as the McGovern Institute for Brain Research, the Simons Center for the Social Brain, and the Research Laboratory of Electronics; the Harvard University Department of Chemistry and Chemical Biology and the John A. Paulson School of Engineering and Applied Sciences; Stanford University; and the Swiss Federal Institute of Technology in Zurich. The work was supported by the Simons Postdoctoral Fellowship, the U.S. 
Defense Advanced Research Projects Agency, the <a href="">Bose Research Grant</a>, and the National Institutes of Health.</p> Diagram illustrates the structure of the tiny bubbles, called liposomes, used to deliver drugs. The blue spheres represent lipids, a kind of fat molecule, surrounding a central cavity containing magnetic nanoparticles (black) and the drug to be delivered (red). When the nanoparticles are heated, the drug can escape into the body.Image courtesy of the researchersDrug delivery, Research, Materials Science and Engineering, Nanoscience and nanotechnology, Neuroscience, Medicine, DMSE, Brain and cognitive sciences, School of Engineering, School of Science, McGovern Institute, Advanced Research Projects Agency (DARPA), National Institutes of Health (NIH) Finding the brain’s compass A powerful method has allowed McGovern researchers to discover how the brain represents the complex world in simple shapes. Mon, 12 Aug 2019 13:00:00 -0400 Sabbi Lall | McGovern Institute for Brain Research <p>The world is constantly bombarding our senses with information, but the ways in which our brain extracts meaning from this information remains elusive. How do neurons transform raw visual input into a mental representation of an object — like a chair or a dog?</p> <p>In work <a href="" target="_blank">published in <em>Nature Neuroscience</em></a>, MIT neuroscientists have identified a brain circuit in mice that distills “high-dimensional” complex information about the environment into a simple abstract object in the brain.</p> <p>“There are no degree markings in the external world; our current head direction has to be extracted, computed, and estimated by the brain,” explains Ila Fiete, an associate member of the McGovern Institute and senior author of the paper. 
“The approaches we used allowed us to demonstrate the emergence of a low-dimensional concept, essentially an abstract compass in the brain.”</p> <p>This abstract compass, according to the researchers, is a one-dimensional ring that represents the current direction of the head relative to the external world.</p> <div class="cms-placeholder-content-video"></div> <p><strong>Schooling fish</strong></p> <p>Trying to show that a data cloud has a simple shape, like a ring, is a bit like watching a school of fish. By tracking one or two sardines, you might not see a pattern. But if you could map all of the sardines, and transform the noisy dataset into points representing the positions of the whole school of sardines over time, and where each fish is relative to its neighbors, a pattern would emerge. This model would reveal a ring shape, a simple shape formed by the activity of hundreds of individual fish.</p> <p>Fiete, who is also an associate professor in MIT’s Department of Brain and Cognitive Sciences, used a similar approach, called topological modeling, to transform the activity of large populations of noisy neurons into a data cloud in the shape of a ring.</p> <p><strong>Simple and persistent ring</strong></p> <p>Previous work in fly brains revealed a physical ellipsoid ring of neurons representing changes in the direction of the fly’s head, and researchers suspected that such a system might also exist in mammals.</p> <p>In this new mouse study, Fiete and her colleagues measured hours of neural activity from scores of neurons in the anterodorsal thalamic nucleus (ADN) — a region believed to play a role in spatial navigation — as the animals moved freely around their environment. 
They mapped how the neurons in the ADN circuit fired as the animal’s head changed direction.</p> <p>Together, these data points formed a cloud in the shape of a simple and persistent ring.</p> <p>“This tells us a lot about how neural networks are organized in the brain,” explains Edvard Moser, director of the Kavli Institute of Systems Neuroscience in Norway, who was not involved in the study. “Past data have indirectly pointed towards such a ring-like organization, but only now has it been possible, with the right cell numbers and methods, to demonstrate it convincingly,” says Moser.</p> <p>Their method for characterizing the shape of the data cloud allowed Fiete and colleagues to determine which variable the circuit was devoted to representing, and to decode this variable over time, using only the neural responses.</p> <p>“The animal’s doing really complicated stuff,” explains Fiete, “but this circuit is devoted to integrating the animal’s speed along a one-dimensional compass that encodes head direction. Without a manifold approach, which captures the whole state space, you wouldn’t know that this circuit of thousands of neurons is encoding only this one aspect of the complex behavior, and not encoding any other variables at the same time.”</p> <p>Even during sleep, when the circuit is not being bombarded with external information, it robustly traces out the same one-dimensional ring, as if dreaming of past head-direction trajectories.</p> <p>Further analysis revealed that the ring acts as an attractor. If neurons stray off trajectory, they are drawn back to it, quickly correcting the system. 
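This error-correcting behavior can be illustrated with a toy ring attractor — a caricature, not the paper's topological analysis: a noisy two-dimensional state is pulled back to the nearest point on a unit ring, preserving the encoded angle (the head-direction estimate) while discarding the off-ring noise.

```python
import numpy as np

rng = np.random.default_rng(42)

def pull_to_ring(state):
    """One attractor step (toy): snap a 2-D state to the nearest point
    on the unit ring, keeping its angle — the head-direction estimate."""
    return state / np.linalg.norm(state)

# Noisy population states scattered around true directions on the ring.
true_angles = rng.uniform(0, 2 * np.pi, 500)
states = np.c_[np.cos(true_angles), np.sin(true_angles)]
states += 0.1 * rng.standard_normal(states.shape)

corrected = np.array([pull_to_ring(s) for s in states])
decoded = np.arctan2(corrected[:, 1], corrected[:, 0]) % (2 * np.pi)

# Angular error between decoded and true head directions stays small.
err = np.abs((decoded - true_angles + np.pi) % (2 * np.pi) - np.pi)
print(round(float(err.mean()), 3))
```

The correction removes the radial (off-ring) component of the noise entirely; only the angular component survives, which is why a ring attractor yields a stable, low-dimensional head-direction code.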
This attractor property of the ring means that the representation of head direction in abstract space is reliably stable over time, a key requirement if we are to understand and maintain a stable sense of where our head is relative to the world around us.</p> <p>“In the absence of this ring,” Fiete explains, “we would be lost in the world.”</p> <p><strong>Shaping the future </strong></p> <p>Fiete’s work provides a first glimpse into how complex sensory information is distilled into a simple concept in the mind, and how that representation autonomously corrects errors, making it exquisitely stable.</p> <p>But the implications of this study go beyond coding of head direction.</p> <p>“Similar organization is probably present for other cognitive functions, so the paper is likely to inspire numerous new studies,” says Moser.</p> <p>Fiete sees these analyses and related studies carried out by colleagues at the Norwegian University of Science and Technology, Princeton University, the Weizmann Institute, and elsewhere as fundamental to the future of neural decoding studies.</p> <p>With this approach, she explains, it is possible to extract abstract representations of the mind from the brain, potentially even thoughts and dreams.</p> <p>“We’ve found that the brain deconstructs and represents complex things in the world with simple shapes,” explains Fiete. “Manifold-level analysis can help us to find those shapes, and they almost certainly exist beyond head-direction circuits.”</p> MIT researchers have found that a circuit of thousands of neurons in the mammalian brain (blue) traces out a one-dimensional ring during complex navigation behaviors. 
“In the absence of this ring,” explains Associate Professor Ila Fiete, “we would be lost in the world.”Image: Ila FieteMcGovern Institute, Brain and cognitive sciences, School of Science, Research, Neuroscience Study measures how fast humans react to road hazards In “semiautonomous” cars, older drivers may need more time to take the wheel when responding to the unexpected. Wed, 07 Aug 2019 11:28:59 -0400 Rob Matheson | MIT News Office <p>Imagine you’re sitting in the driver’s seat of an autonomous car, cruising along a highway and staring down at your smartphone. Suddenly, the car detects a moose charging out of the woods and alerts you to take the wheel. Once you look back at the road, how much time will you need to safely avoid the collision?</p> <p>MIT researchers have found an answer in a new study that shows humans need about 390 to 600 milliseconds to detect and react to road hazards, given only a single glance at the road — with younger drivers detecting hazards nearly twice as fast as older drivers. The findings could help developers of autonomous cars ensure they are allowing people enough time to safely take the controls and steer clear of unexpected hazards.</p> <p>Previous studies have examined hazard response times while people kept their eyes on the road and actively searched for hazards in videos. In this new study, recently published in the <em>Journal of Experimental Psychology: General</em>, the researchers examined how quickly drivers can recognize a road hazard if they’ve just looked back at the road. That’s a more realistic scenario for the coming age of semiautonomous cars that require human intervention and may unexpectedly hand over control to human drivers when facing an imminent hazard.</p> <p>“You’re looking away from the road, and when you look back, you have no idea what’s going on around you at first glance,” says lead author Benjamin Wolfe, a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL). 
“We wanted to know how long it takes you to say, ‘A moose is walking into the road over there, and if I don’t do something about it, I’m going to take a moose to the face.’”</p> <p>For their study, the researchers built a unique dataset that includes YouTube dashcam videos of drivers responding to road hazards — such as objects falling off truck beds, moose running into the road, 18-wheelers toppling over, and sheets of ice flying off car roofs — and other videos without road hazards. Participants were shown split-second snippets of the videos, in between blank screens. In one test, they indicated if they detected hazards in the videos. In another test, they indicated if they would react by turning left or right to avoid a hazard.</p> <p>The results indicate that younger drivers are quicker at both tasks: Older drivers (55 to 69 years old) required 403 milliseconds to detect hazards in videos, and 605 milliseconds to choose how they would avoid the hazard. Younger drivers (20 to 25 years old) only needed 220 milliseconds to detect and 388 milliseconds to choose.</p> <p>Those age results are important, Wolfe says. When autonomous vehicles are ready to hit the road, they’ll most likely be expensive. “And who is more likely to buy expensive vehicles? Older drivers,” he says. “If you build an autonomous vehicle system around the presumed capabilities of reaction times of young drivers, that doesn’t reflect the time older drivers need. In that case, you’ve made a system that’s unsafe for older drivers.”</p> <p>Joining Wolfe on the paper are: Bobbie Seppelt, Bruce Mehler, Bryan Reimer, of the MIT AgeLab, and Ruth Rosenholtz of the Department of Brain and Cognitive Sciences and CSAIL.</p> <p><strong>Playing “the worst video game ever”</strong></p> <p>In the study, 49 participants sat in front of a large screen that closely matched the visual angle and viewing distance for a driver, and watched 200 videos from the Road Hazard Stimuli dataset for each test. 
They were given a toy wheel, brake, and gas pedals to indicate their responses. “Think of it as the worst video game ever,” Wolfe says.</p> <p>The dataset includes about 500 eight-second dashcam videos of a variety of road conditions and environments. About half of the videos contain events leading to collisions or near collisions. The other half try to closely match each of those driving conditions, but without any hazards. Each video is annotated at two critical points: the frame when a hazard becomes apparent, and the first frame of the driver’s response, such as braking or swerving.</p> <p>Before each video, participants were shown a split-second white noise mask. When that mask disappeared, participants saw a snippet of a random video that did or did not contain an imminent hazard. After the video, another mask appeared. Directly following that, participants stepped on the brake if they saw a hazard or the gas if they didn’t. There was then another split-second pause on a black screen before the next mask popped up.</p> <p>When participants started the experiment, the first video they saw was shown for 750 milliseconds. But the duration changed during each test, depending on the participants’ responses. If a participant responded incorrectly to one video, the next video’s duration would extend slightly. If they responded correctly, it would shorten. In the end, durations ranged from a single frame (33 milliseconds) up to one second. “If they got it wrong, we assumed they didn’t have enough information, so we made the next video longer. If they got it right, we assumed they could do with less information, so we made it shorter,” Wolfe says.</p> <p>The second task used the same setup to record how quickly participants could choose a response to a hazard. For that, the researchers used a subset of videos where they knew the response was to turn left or right. The video stopped, and the mask appeared, at the first frame at which the driver began to react. 
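The duration rule described above — lengthen after an error, shorten after a correct response, clamped between one frame and one second — is a classic adaptive staircase. A minimal sketch, using a hypothetical 50 ms step size (the study's actual increment isn't given here):

```python
FRAME_MS = 33           # one frame at roughly 30 fps
MIN_MS, MAX_MS = FRAME_MS, 1000
STEP_MS = 50            # hypothetical step size for illustration

def next_duration(current_ms, was_correct):
    """Staircase update: shorten the next video after a correct response,
    lengthen it after an incorrect one, clamped to [one frame, 1 s]."""
    delta = -STEP_MS if was_correct else STEP_MS
    return max(MIN_MS, min(MAX_MS, current_ms + delta))

# Starting at 750 ms, mostly correct answers walk the duration down,
# converging toward the shortest glimpse a participant can still use.
d = 750
for correct in [True, True, False, True, True, True]:
    d = next_duration(d, correct)
print(d)
```

Staircases like this converge on the threshold duration at which a participant's accuracy balances, which is what lets the researchers estimate per-person hazard-detection times.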
Then, participants turned the wheel either left or right to indicate where they’d steer.</p> <p>“It’s not enough to say, ‘I know something fell into the road in my lane.’ You need to understand that there’s a shoulder to the right and a car in the next lane that I can’t accelerate into, because I’ll have a collision,” Wolfe says.</p> <p><strong>More time needed</strong></p> <p>The MIT study didn’t record how long it actually takes people to, say, physically look up from their phones or turn a wheel. Instead, it showed people need up to 600 milliseconds to just detect and react to a hazard, while having no context about the environment.</p> <p>Wolfe thinks that’s concerning for autonomous vehicles, since they may not give humans adequate time to respond, especially under panic conditions. Other studies, for instance, have found that it takes people who are driving normally, with their eyes on the road, about 1.5 seconds to physically avoid road hazards, starting from initial detection.</p> <p>Driverless cars will already require a couple hundred milliseconds to alert a driver to a hazard, Wolfe says. “That already bites into the 1.5 seconds,” he says. “If you look up from your phone, it may take an additional few hundred milliseconds to move your eyes and head. That doesn’t even get into the time it’ll take to reassert control and brake or steer. Then, it starts to get really worrying.”</p> <p>Next, the researchers are studying how well peripheral vision helps in detecting hazards. Participants will be asked to stare at a blank part of the screen — indicating where a smartphone may be mounted on a windshield — and similarly pump the brakes when they notice a road hazard.</p> <p>The work is sponsored, in part, by the Toyota Research Institute. 
</p> Researchers are examining how quickly drivers can detect and correctly respond to road hazards, to improve the safety of “semiautonomous” cars.Image: stock imageResearch, Computer science and technology, Autonomous vehicles, Automobiles, Transportation, Industry, Behavior, Technology and society, AgeLab, Computer Science and Artificial Intelligence Laboratory (CSAIL), Brain and cognitive sciences, School of Science, School of Engineering How brain cells pick which connections to keep Novel study shows protein CPG15 acts as a molecular proxy of experience to mark synapses for stabilization. Tue, 06 Aug 2019 11:00:00 -0400 David Orenstein | Picower Institute for Learning and Memory <div> <div> <div> <div> <p>Brain cells, or neurons, constantly tinker with their circuit connections, a crucial feature that allows the brain to store and process information. While neurons frequently test out new potential partners through transient contacts, only a fraction of fledgling junctions, called synapses, are selected to become permanent.</p> </div> </div> </div> </div> <p>The major criterion for excitatory synapse selection is how well synapses engage in response to experience-driven neural activity, but how such selection is implemented at the molecular level has been unclear. In a new study, MIT neuroscientists have identified the gene and protein, CPG15, that allows experience to tap a synapse as a keeper.</p> <p>In a series of novel experiments described in <em>Cell Reports,</em> the team at MIT’s Picower Institute for Learning and Memory used multi-spectral, high-resolution two-photon microscopy to literally watch potential synapses come and go in the visual cortex of mice — both in the light, or normal visual experience, and in the darkness, where there is no visual input.
By comparing observations made in normal mice and ones engineered to lack CPG15, they were able to show that the protein is required for visual experience to facilitate the transition of nascent excitatory synapses to permanence.</p> <p>Mice engineered to lack CPG15 exhibit only one behavioral deficiency: They learn much more slowly than normal mice, says senior author Elly Nedivi, the William R. (1964) and Linda R. Young Professor of Neuroscience in the Picower Institute and a professor of brain and cognitive sciences at MIT. They need more trials and repetitions to learn associations that other mice can learn quickly. The new study suggests that’s because without CPG15, they must rely on circuits where synapses simply happened to take hold, rather than on a circuit architecture that has been refined by experience for optimal efficiency.</p> <p>“Learning and memory are really specific manifestations of our brain’s ability in general to constantly adapt and change in response to our environment,” Nedivi says. “It’s not that the circuits aren’t there in mice lacking CPG15, they just don’t have that feature — which is really important — of being optimized through use.”</p> <p><strong>Watching in light and darkness</strong></p> <p>The first experiment reported in the paper, led by former MIT postdoc Jaichandar Subramanian, who is now an assistant professor at the University of Kansas, is a contribution to neuroscience in and of itself, Nedivi says. The novel labeling and imaging technologies implemented in the study, she says, allowed the team to track key events in synapse formation with unprecedented spatial and temporal resolution.
The study resolved the emergence of “dendritic spines,” which are the structural protrusions on which excitatory synapses are formed, and the recruitment of the synaptic scaffold, PSD95, that signals that a synapse is there to stay.</p> <p>The team tracked specially labeled neurons in the visual cortex of mice after normal visual experience, and after two weeks in darkness. To their surprise, they saw that spines would routinely arise and then typically disappear again at the same rate regardless of whether the mice were in light or darkness. This careful scrutiny of spines confirmed that experience doesn’t matter for spine formation, Nedivi said. That upends a common assumption in the field, which held that experience was necessary for spines to even emerge.</p> <p>By keeping track of the presence of PSD95 they could confirm that the synapses that became stabilized during normal visual experience were the ones that had accumulated that protein. But the question remained: How does experience drive PSD95 to the synapse? The team hypothesized that CPG15, which is activity dependent and associated with synapse stabilization, does that job.</p> <p><strong>CPG15 represents experience</strong></p> <p>To investigate that, they repeated the same light-versus-dark experiences, but this time in mice engineered to lack CPG15. In the normal mice, there was much more PSD95 recruitment during the light phase than during the dark, but in the mice without CPG15, the experience of seeing in the light never made a difference. It was as if CPG15-less mice in the light were like normal mice in the dark.</p> <p>Later they tried another experiment testing whether the low PSD95 recruitment seen when normal mice were in the dark could be rescued by exogenous expression of CPG15. Indeed, PSD95 recruitment shot up, as if the animals were exposed to visual experience. 
This showed that CPG15 not only carries the message of experience in the light, it can actually substitute for it in the dark, essentially “tricking” PSD95 into acting as if experience had called upon it.</p> <p>“This is a very exciting result, because it shows that CPG15 is not just required for experience-dependent synapse selection, but it’s also sufficient,” says Nedivi. “That’s unique in relation to all other molecules that are involved in synaptic plasticity.”</p> <p><strong>A new model and method</strong></p> <p>In all, the paper’s data allowed Nedivi to propose a new model of experience-dependent synapse stabilization: Regardless of neural activity or experience, spines emerge with fledgling excitatory synapses and the receptors needed for further development. If activity and experience send CPG15 their way, that draws in PSD95 and the synapse stabilizes. If experience doesn’t involve the synapse, it gets no CPG15, very likely no PSD95, and the spine withers away.</p> <p>The paper potentially has significance beyond the findings about experience-dependent synapse stabilization, Nedivi says. The method it describes of closely monitoring the growth or withering of spines and synapses amid a manipulation (like knocking out or modifying a gene) allows for a whole raft of studies examining how a gene, a drug, or other factors affect synapses.</p> <p>“You can apply this to any disease model and use this very sensitive tool for seeing what might be wrong at the synapse,” she says.</p> <p>In addition to Nedivi and Subramanian, the paper’s other authors are Katrin Michel and Marc Benoit.</p> <p>The National Institutes of Health and the JPB Foundation provided support for the research.</p> Images from two-photon microscopy track the comings and goings of dendritic spines (key infrastructure for neural connections called synapses) in a mouse brain. Top row: A spine present on day 14 is gone by two weeks later.
Bottom row: A spine emerges around day 28 and sticks around.Photo: Nedivi Lab/Picower InstitutePicower Institute, Brain and cognitive sciences, School of Science, Neuroscience, Memory, Biology, Research, Proteins Speeding up drug discovery for brain diseases Whitehead Institute team finds drugs that activate a key brain gene; initial tests in cells and mice show promise for rare, untreatable neurodevelopmental disorder. Wed, 31 Jul 2019 14:25:01 -0400 Nicole Davis <p>A research team led by Whitehead Institute scientists has identified 30 distinct chemical compounds — 20 of which are drugs undergoing clinical trial or have already been approved by the FDA — that boost the protein production activity of a critical gene in the brain and improve symptoms of Rett syndrome, a rare neurodevelopmental condition that often provokes autism-like behaviors in patients. The new study, conducted in human cells and mice, helps illuminate the biology of an important gene, called KCC2, which is implicated in a variety of brain diseases, including autism, epilepsy, schizophrenia, and depression. The researchers’ findings, published in the July 31 online issue of <em>Science Translational Medicine</em>, could help spur the development of new treatments for a host of devastating brain disorders.</p> <p>“There’s increasing evidence that KCC2 plays important roles in several different disorders of the brain, suggesting that it may act as a common driver of neurological dysfunction,” says senior author <a href="">Rudolf</a><a href=""> Jaenisch</a>, a founding member of Whitehead Institute and professor of biology at MIT. “These drugs we’ve identified may help speed up the development of much-needed treatments.”</p> <p>KCC2 works exclusively in the brain and spinal cord, carrying ions in and out of specialized cells known as neurons. 
This shuttling of electrically charged molecules helps maintain the cells’ electrochemical makeup, enabling neurons to fire when they need to and to remain idle when they don’t. If this delicate balance is upset, brain function and development go awry.</p> <p>Disruptions in KCC2 function have been linked to several human brain disorders, including Rett syndrome (RTT), a progressive and often debilitating disorder that typically emerges early in life in girls and can involve disordered movement, seizures, and communication difficulties. Currently, there is no effective treatment for RTT.</p> <p>Jaenisch and his colleagues, led by first author Xin Tang, devised a high-throughput screening assay to uncover drugs that increase KCC2 gene activity. Using CRISPR/Cas9 genome editing and stem cell technologies, they engineered human neurons to provide rapid readouts of the amount of KCC2 protein produced. The researchers created these so-called reporter cells from both healthy human neurons and RTT neurons that carry disease-causing mutations in the MECP2 gene. These reporter neurons were then fed into a drug-screening pipeline to find chemical compounds that can enhance KCC2 gene activity.</p> <p>Tang and his colleagues screened over 900 chemical compounds, focusing on those that have been FDA-approved for use in other conditions, such as cancer, or have undergone at least some level of clinical testing. “The beauty of this approach is that many of these drugs have been studied in the context of non-brain diseases, so the mechanisms of action are known,” says Tang. “Such molecular insights enable us to learn how the KCC2 gene is regulated in neurons, while also identifying compounds with potential therapeutic value.”</p> <p>The Whitehead Institute team identified a total of 30 drugs with KCC2-enhancing activity. These compounds, referred to as KEECs (short for KCC2 expression-enhancing compounds), work in a variety of ways.
Some block a molecular pathway, called FLT3, which is overactive in some forms of leukemia. Others inhibit the GSK3b pathway, which has been implicated in several brain diseases. Another KEEC acts on SIRT1, which plays a key role in a variety of biological processes, including aging.</p> <p>In follow-up experiments, the researchers exposed RTT neurons and mouse models to KEEC treatment and found that some compounds can reverse certain defects associated with the disease, including abnormalities in neuronal signaling, breathing, and movement. These efforts were made possible by a collaboration with <a href="">Mriganka Sur’s</a> group at the Picower Institute for Learning and Memory, in which Keji Li and colleagues led the behavioral experiments in mice that were essential for revealing the drugs’ potency.</p> <p>“Our findings illustrate the power of an unbiased approach for discovering drugs that could significantly improve the treatment of neurological disease,” says Jaenisch. “And because we are starting with known drugs, the path to clinical translation is likely to be much shorter.”</p> <p>In addition to speeding up drug development for Rett syndrome, the researchers’ unique drug-screening strategy, which harnesses an engineered gene-specific reporter to unearth promising drugs, can also be applied to other important disease-related genes in the brain. “Many seemingly distinct brain diseases share common root causes of abnormal gene expression or disrupted signaling pathways,” says Tang.
“We believe our method has broad applicability and could help catalyze therapeutic discovery for a wide range of neurological conditions.”</p> <p>Support for this work was provided by the National Institutes of Health, the Simons Foundation Autism Research Initiative, the Simons Center for the Social Brain at MIT, the Rett Syndrome Research Trust, the International Rett Syndrome Foundation, the Damon Runyon Cancer Foundation, and the National Cancer Institute.</p> Image: Steven Lee/Whitehead InstituteWhitehead Institute, Picower Institute, School of Science, Behavior, Biology, Brain and cognitive sciences, CRISPR, Development, Disease, Genetics, Mental health, National Institutes of Health (NIH), Pharmaceuticals, Research, Proteins, Drug development Neuroscientists identify brain region linked to altered social interactions in autism model In a mouse model, restoring activity of a specific forebrain region reverses social traits associated with autism. Fri, 26 Jul 2019 10:23:01 -0400 Sabbi Lall | McGovern Institute for Brain Research <p>Although psychiatric disorders can be linked to particular genes, the brain regions and mechanisms underlying particular disorders are not well-understood. Mutations or deletions of the SHANK3 gene are strongly associated with autism spectrum disorder (ASD) and a related rare disorder called Phelan-McDermid syndrome. Mice with SHANK3 mutations also display some of the traits associated with autism, including avoidance of social interactions, but the brain regions responsible for this behavior have not been identified.</p> <p>A new study by neuroscientists at MIT and colleagues in China provides clues to the neural circuits underlying social deficits associated with ASD. 
The <a href="">paper</a>, published in <em>Nature Neuroscience</em>, found that structural and functional impairments in the anterior cingulate cortex (ACC) of SHANK3 mutant mice are linked to altered social interactions.</p> <p>“Neurobiological mechanisms of social deficits are very complex and involve many brain regions, even in a mouse model,” explains <a href="">Guoping Feng</a>, the James W. and Patricia T. Poitras Professor at MIT and one of the senior authors of the study. “These findings add another piece of the puzzle to mapping the neural circuits responsible for this social deficit in ASD models.”</p> <p>The <em>Nature Neuroscience</em> paper is the result of a collaboration between Feng, who is also an investigator at MIT’s McGovern Institute and a senior scientist in the Broad Institute’s Stanley Center for Psychiatric Research, and Wenting Wang and Shengxi Wu at the Fourth Military Medical University, Xi’an, China.</p> <p>A number of brain regions have been implicated in social interactions, including the prefrontal cortex (PFC) and its projections to brain regions including the nucleus accumbens and habenula, but these studies failed to definitively link the PFC to altered social interactions seen in SHANK3 knockout mice.</p> <p>In the new study, the authors instead focused on the ACC, a brain region noted for its role in social functions in humans and animal models. The ACC is also known to play a role in fundamental cognitive processes, including cost-benefit calculation, motivation, and decision making.</p> <p>In mice lacking SHANK3, the researchers found structural and functional disruptions at the synapses, or connections, between excitatory neurons in the ACC. 
The researchers went on to show that the loss of SHANK3 in excitatory ACC neurons alone was enough to disrupt communication between these neurons and led to abnormally reduced activity of these neurons during behavioral tasks reflecting social interaction.</p> <p>Having implicated these ACC neurons in social preferences and interactions in SHANK3 knockout mice, the authors then tested whether activating these same neurons could rescue these behaviors. Using optogenetics and specific drugs, the researchers activated the ACC neurons and found improved social behavior in the SHANK3 mutant mice.</p> <p>“Next, we are planning to explore brain regions downstream of the ACC that modulate social behavior in normal mice and models of autism,” explains Wenting Wang, co-corresponding author on the study. “This will help us to better understand the neural mechanisms of social behavior, as well as social deficits in neurodevelopmental disorders.”</p> <p>Previous clinical studies reported that anatomical structures in the ACC were altered and/or dysfunctional in people with ASD, an initial indication that the findings from SHANK3 mice may also hold true in these individuals.</p> <p>The research was funded, in part, by the Natural Science Foundation of China. Guoping Feng was supported by NIMH grant no. MH097104, the&nbsp;<a href=""> Poitras Center for Psychiatric Disorders Research at the McGovern Institute at MIT</a>, and the <a href="">Hock E. Tan and K. Lisa Yang Center for Autism Research </a>at the McGovern Institute at MIT.</p> SHANK3 (green) is expressed along with a neural marker (NeuN) in the mouse anterior cingulate cortex. Image: Guoping FengMcGovern Institute, Broad Institute, Brain and cognitive sciences, School of Science, Autism, Genetics, Research, Neuroscience New faces in the School of Science faculty Departments of Biology, Brain and Cognitive Sciences, Chemistry, and Physics welcome new faculty members.
Tue, 23 Jul 2019 16:00:01 -0400 School of Science <p>This fall, the School of Science will welcome seven new members joining the faculty in the departments of Biology, Brain and Cognitive Sciences, Chemistry, and Physics.</p> <p><a href="">Netta Engelhardt</a> studies gravitational aspects of quantum gravity with an emphasis on string theory. She looks into the thermodynamic behavior of black holes and the idea that singularities are always hidden behind event horizons. Engelhardt joins the Department of Physics as an assistant professor. Engelhardt’s BS is in physics and mathematics from Brandeis University, and she received her PhD in physics from the University of California at Santa Barbara. Previously, she was a member of the Princeton Gravity Initiative at Princeton University. Engelhardt is also affiliated with the MIT Center for Theoretical Physics and the Laboratory for Nuclear Science.</p> <p><a href="">Evelina Fedorenko</a> investigates how our brains process language. She has developed novel analytic approaches for functional magnetic resonance imaging (fMRI) and other brain imaging techniques to help answer the questions of how the language processing network functions and how it relates to other networks in the brain. She works with both neurotypical individuals and individuals with brain disorders. Fedorenko joins the Department of Brain and Cognitive Sciences as an assistant professor. She received her BA from Harvard University in linguistics and psychology and then completed her doctoral studies at MIT in 2007. After graduating from MIT, Fedorenko worked as a postdoc and then as a research scientist at the McGovern Institute for Brain Research. In 2014, she joined the faculty at Massachusetts General Hospital and Harvard Medical School, where she was an associate researcher and an assistant professor, respectively. She is also a member of the McGovern Institute.</p> <p><a href="">Erin Kara</a> researches black holes.
She looks into their formation and how they grow and impact the environments around them, particularly with respect to event horizons. To do this, she employs X-ray spectral timing observations. Kara is welcomed by the Department of Physics as an assistant professor. Kara joins MIT from the University of Maryland and the NASA Goddard Space Flight Center, where she was a Hubble Postdoctoral Fellow and a Joint Space-Science Institute Fellow. She received her undergraduate degree from Barnard College in 2011, and an MPhil in astrophysics and PhD in astronomy from Cambridge University. She is also a member of the MIT Kavli Institute for Astrophysics and Space Research.</p> <p><a href="">Pulin Li</a> is a developmental and synthetic biologist. Her work aims to develop methods for programming cells to produce tissues for regenerative medicine. She and her lab group accomplish this by using bioengineering tools, making quantitative measurements of genetic circuits in natural systems, and applying mathematical modeling. Li is joining the MIT community as an assistant professor in the Department of Biology. She received her bachelor’s degree from Peking University, and she completed a PhD in chemical biology at Harvard University. Prior to her appointment at MIT, she was a postdoc at Caltech. Li is also a member of the Whitehead Institute for Biomedical Research.</p> <p><a href="">Morgan Sheng</a> focuses on the structure, function, and turnover of synapses, the junctions that allow communication between brain cells. His discoveries have improved our understanding of the molecular basis of cognitive function and diseases of the nervous system, such as autism, Alzheimer’s disease, and dementia. Being both a physician and a scientist, he incorporates genetic as well as biological insights to aid the study and treatment of mental illnesses and neurodegenerative diseases.
He rejoins the Department of Brain and Cognitive Sciences (BCS), returning as a professor of neuroscience, a position he also held from 2001 to 2008. At that time, he was a member of the Picower Institute for Learning and Memory, a joint appointee in the Department of Biology, and an investigator of the Howard Hughes Medical Institute. Sheng earned his PhD from Harvard University in 1990, completed a postdoc at the University of California at San Francisco in 1994, and finished his medical training with a residency in London in 1986. From 1994 to 2001, he researched molecular and cellular neuroscience at Massachusetts General Hospital and Harvard Medical School. From 2008 to 2019, he was vice president of neuroscience at Genentech, a leading biotech company. In addition to his faculty appointment in BCS, Sheng is a core institute member and co-director of the Stanley Center for Psychiatric Research at the Broad Institute of MIT and Harvard, as well as an affiliate member of the McGovern Institute and the Picower Institute.</p> <p><a href="">Seychelle Vos</a> studies genome organization and its effect on gene expression at the intersection of biochemistry and genetics. Vos uses X-ray crystallography, cryo-electron microscopy, and biophysical approaches to understand how transcription is physically coupled to the genome’s organization and structure. She joins the Department of Biology as an assistant professor after completing a postdoc at the Max Planck Institute for Biophysical Chemistry. Vos received her BS in genetics in 2008 from the University of Georgia and her PhD in molecular and cell biology in 2013 from the University of California at Berkeley.</p> <p><a href="">Xiao Wang</a> is a chemist and molecular engineer working to improve our understanding of biology and human health. She focuses on brain function and dysfunction, producing and applying new chemical, biophysical, and genomic tools at the molecular level.
Previously, she focused on RNA modifications and how they impact cellular function. Wang is joining MIT as an assistant professor in the Department of Chemistry. She was a Life Sciences Research Foundation postdoc at Stanford University. Wang received her BS in chemistry and molecular engineering from Peking University in 2010 and her PhD in chemistry from the University of Chicago in 2015. She is also a core member of the Broad Institute of MIT and Harvard.</p> New faculty of the MIT School of Science: (clockwise from top left) Netta Engelhardt, Evelina Fedorenko, Erin Kara, Pulin Li, Morgan Sheng, Seychelle Vos, and Xiao Wang.Brain and cognitive sciences, School of Science, McGovern Institute, Picower Institute, Biology, Broad Institute, Chemistry, Faculty, Physics, Laboratory for Nuclear Science, Whitehead Institute, Kavli Institute Celebrating a curious mind: Steven Keating 1988-2019 Steven Keating SM&#039;12, PhD &#039;16 inspired millions with his research-driven approach to battling cancer and his advocacy for open patient health data. Mon, 22 Jul 2019 14:35:01 -0400 Mary Beth Gallagher | Department of Mechanical Engineering <p>Alumnus Steven John Keating SM '12, PhD '16 passed away from brain cancer on July 19 at the age of 31.</p> <p>Keating received his master’s degree and PhD in mechanical engineering and was a member of the MIT Media Lab’s Mediated Matter team. He inspired countless people with his courageous, research-driven approach to battling cancer and was a champion for patient access to health data.</p> <p>Curiosity was a driving force in Keating’s life. Growing up in Calgary, Canada, he spent a large portion of his childhood tinkering with and building devices. This predilection for making led not only to his love of engineering, but also to his affinity for film and photography.
As an undergraduate at Queen’s University in Kingston, Canada, Keating pursued his twin passions — earning a dual degree in mechanical and materials engineering alongside a degree in film and media.</p> <p>In 2010, Keating fulfilled his lifelong dream of attending MIT and enrolled as a graduate student studying mechanical engineering. He joined the <a href="">Media Lab’s Mediated Matter</a> group under his co-advisor Neri Oxman, the Sony Corporation Career Development Associate Professor of Media Arts and Sciences.</p> <p>At the Media Lab, Keating conducted research on additive manufacturing and synthetic biology. He pushed the limits of 3-D printing and <a href="">developed a technology</a> that could 3-D print the foundation of a building. This technology was recently acquired by NASA for potential applications in their pursuit of landing on the moon by 2024.</p> <p>“Steve utilized humor while solving equations and inspired a sense of empathy when discussing ethical issues associated with robotics and synthetic biology,” Oxman reflects. “The projects he left behind are very much alive and will continue to have meaningful impact on the physical and societal landscapes we inhabit.”</p> <p>In the <a href="">Department of Mechanical Engineering</a>, Keating served as a teaching assistant for the senior capstone class 2.009 (Product Engineering Processes), alongside his co-advisor, David Wallace, professor of mechanical engineering. He also helped teach the popular introductory course 2.00b (Toy Product Design) with Wallace.</p> <p>“Steve had an infectious kindness and curiosity that elevated those around him, exploring simply for the joy and thrill of learning,” says Wallace. “His teaching contributions in our freshman toy product design and senior product engineering design classes made an enduring impact.”</p> <p>Four years into his graduate studies at MIT, Keating’s world was turned upside down when a baseball-sized tumor was found in his brain. 
The innate curiosity that had brought him to MIT ultimately led to his diagnosis. In a <a href="">2014 speech at the Koch Institute</a>, Keating recalled: “Curiosity is why we are here [at MIT] doing research, and ironically that’s how I found my tumor.”</p> <p>As an undergraduate in 2007, Keating had participated in a brain study purely out of curiosity. His MRI scans revealed a dime-sized abnormality located near the smell center of his brain. This knowledge prompted Keating to seek medical attention when, in the summer of 2014, he began smelling vinegar and getting headaches. A new MRI scan showed a low-grade glioma in the frontal left lobe of his brain that would require immediate surgery.</p> <p>After Keating received this news, Yoel Fink, professor of materials science and engineering, entered his life. Fink had previously developed a fiber optic scalpel that enabled minimally invasive surgery on brain tumors. As a result, he was connected to the top neurosurgeons in the world. Fink put Keating in touch with E. Antonio Chiocca, neurosurgeon-in-chief and chair of the Department of Neurosurgery at <a href="" target="_blank">Brigham and Women’s Hospital</a> in Boston. Chiocca performed the surgery to remove Keating’s tumor.</p> <p>In an email to his friends and family in advance of the surgery, Keating wrote: “The world is a lovely, splendid, and fascinating place. But most of all, to me, it is beautifully curious.”</p> <p>Keating proved to be anything but an average patient. “Steve confronted his disease like a true ‘MITer’: He studied it, researched it, applied his creativity and interest in the sciences and engineering to see how best to face this enemy,” explains Chiocca.</p> <p>Ever the researcher, Keating craved every possible data point he could get about his diagnosis and treatment.
Upon learning that accessing medical data required the approval of a medical doctor, he enrolled in an MD program while finishing up his PhD – earning him the nickname ‘MacGyver’ among his colleagues in the Media Lab.</p> <p>Keating pored over footage of his 10-hour surgery, analyzed his own MRI scans, and had his microbiome sequenced. He even 3-D printed a model of his tumor, which he gave to friends and family as a unique Christmas ornament. This model led to a partnership with colleagues at the Media Lab and Harvard University to develop <a href="" target="_blank">a new method to 3-D print</a> more detailed models from medical images.</p> <p>Keating collected 200 gigabytes of his own medical data. Given that knowledge of his own MRI scans and medical data led to his timely diagnosis, he became a staunch advocate for open-sourcing patient data. He wanted to empower patients to gain access to their own health information.</p> <p>“Steve became a voice for patients’ desires to have access and own the data for their disease,” adds Chiocca. “He did this with humility, courage, joy and affability.”</p> <p>Keating’s crusade on behalf of patients everywhere led to a <a href=""><em>New York Times</em> article</a> about his efforts in March 2015. His story was covered widely by the media and inspired millions of people. He gave a <a href="">TEDx Talk</a> about his experiences, joined the Federal Precision Medicine Task Force, and received an invitation to the White House from President Barack Obama.</p> <p>“For him it was all about awareness — he was willing to give up his privacy and share his data with the world to advance the likelihood of an eventual cure for this disease,” says Fink, who along with Oxman remained close to Keating and his family throughout the years.</p> <p>In remission thanks to the efforts of Chiocca and his team of doctors, Keating continued his work with Oxman in the Mediated Matter group.
“Even and especially while battling cancer, Steve remained noble in his ways,” adds Oxman. “Whether taking the initiative on group-based work or gathering the team to discuss a new publication, it was humbling to watch him help others as he battled his challenging condition.”</p> <p>Keating graduated with his PhD in 2016. He moved to Silicon Valley, where he worked as a design engineer at Apple.</p> <p>Last summer, after a routine check-up, he was told he had glioblastoma, a malignant and incurable form of brain cancer. Even after receiving this devastating diagnosis, Keating never lost sight of the impact he could have on others. He tirelessly advocated for patient access to medical data in an effort to save the lives of others, all while undergoing multiple experimental trials and courageously fighting for his own life.</p> <p>“A defining element of his character was to be gracious and giving while he was fighting the battle of his life,” says Fink.</p> <p>Though Keating ultimately succumbed to the disease, others will take up his mantle in the fight for a cure and greater access to patient data. Two days before he passed away, the first-ever Glioblastoma Awareness Day was observed to raise awareness and honor those who have lost their lives to this aggressive form of brain cancer.</p> <p>“Steve never let the knowledge that glioblastoma remains incurable stop him from living his life to the fullest without anger and disappointment,” adds Chiocca. “As cancer scientists, we will continue to research this disease so that Steve’s fight remains our fight.”</p> <p>His passion and spirit will live on with his former colleagues at MIT. “Steven’s presence was luminous and so is his legacy,” says Oxman. “My team and I are honored to continue where our very own ‘MacGyver’ left off.”</p> <p>Keating is survived by his parents, John and Lynn, and his sister, Laura.
In lieu of a traditional memorial service, Keating’s family has created a "cyber celebration" as a forum for people to honor and celebrate his inspiring, curiosity-driven life. If you would like to share memories, stories, pictures or short videos honoring Keating’s memory, please visit <a href="" target="_blank"></a>.</p> MIT alumnus Steven Keating has died. Keating conducted research on additive manufacturing and synthetic biology. He developed a technology that could 3-D print the foundation of a building.Image: Tony Pulsone/MITAlumni/ae, Obituaries, Mechanical engineering, Media Lab, Koch Institute, Health, Health care, Neuroscience, Public health, Synthetic biology, Open access, 3-D printing, Additive manufacturing, Biology, Medicine, School of Engineering, School of Architecture and Planning