Featured Image: An artificial neural network, one of our best computational tools for uncovering the circuitry of our brains. Courtesy Wikimedia Commons.
We’re jam-packed with science on WPRB this week! Olga Lositsky, a graduate student in Princeton’s Neuroscience Institute, graced our show with her thorough understanding of today’s biggest research questions on the brain. We broke down her work on how we make decisions and store memories, which involves both computer models of neurons and psychological experiments. Later in the show, Stevie brings us details on Lucy’s demise and signals that probably aren’t from aliens–and Ingrid Ockert closes it out with her book review on engineers as activists.
Olga began studying neuroscience because its central questions unite other areas of science: psychology, philosophy, and biology come together once we learn how signals propagate around the brain. In fact, scientists categorize neuroscience research into levels of “abstraction.” At the most fundamental level, we can study the machinery of one neuron firing and affecting another; larger than that, we examine groups of neurons that make circuits for more complicated tasks; and finally, looking at the brain as a whole, researchers watch how signals travel from one part of the brain to another with tools like EEG or fMRI, giving a more global understanding. All of these levels pose questions differently, and they often use different terminology–but if we don’t understand each part alone, we’ll never grasp how groups of single neurons can make up a system as complicated as the human personality.
To connect these sub-fields of neuroscience, Olga uses computational modeling as an important tool. With machine learning techniques (which we’ve discussed on this show before), small units called nodes can pass information from layer to layer in a computer program. In the end, the program takes an input and provides an output, just like our brain sees some input and can think of some memory or move a muscle as an output. By changing the architecture of the neural network and observing its behavior, we get clues about what algorithms the brain uses to learn and remember.
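As a concrete illustration (not Olga’s actual models), here is a minimal sketch of that layer-to-layer flow of information, assuming a tiny made-up network with random weights:

```python
import numpy as np

def sigmoid(x):
    # each node squashes its summed input into an output between 0 and 1
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, layers):
    # pass the input through each layer in turn: every node takes a weighted
    # sum of the previous layer's outputs, then applies the squashing function
    activation = x
    for weights in layers:
        activation = sigmoid(weights @ activation)
    return activation

# a toy architecture: 3 input nodes -> 4 hidden nodes -> 2 output nodes
rng = np.random.default_rng(seed=0)
layers = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]

output = forward(np.array([1.0, 0.5, -0.5]), layers)
```

Training adjusts those weights until the outputs become useful; changing the number of layers or nodes changes what the network can learn, which is exactly the kind of architectural experiment that gives clues about the brain’s own algorithms.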
Other technologies have been instrumental in unveiling the brain’s inner workings. Making better robots means programming them to learn, and the code that artificial intelligence engineers write for that purpose has given insight into how the brain processes information. Robots make predictions about the way the world works and have to correct themselves when those predictions are wrong–and now, we think that dopamine might have something to do with the way our brains handle incorrect predictions.
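That predict-and-correct loop can be sketched with a simple delta-rule update. This is a toy version of reward-prediction-error learning, the quantity dopamine signals are thought to resemble; the learning rate and outcome values below are invented for illustration:

```python
def update_prediction(prediction, outcome, learning_rate=0.1):
    # the prediction error measures how wrong we were
    error = outcome - prediction
    # nudge the prediction toward reality, in proportion to the surprise
    return prediction + learning_rate * error

# an agent repeatedly sees an outcome of 1.0 that it initially didn't expect
prediction = 0.0
for _ in range(50):
    prediction = update_prediction(prediction, outcome=1.0)
# after many corrections, the prediction settles near the true outcome
```

Early on, the error (and the “dopamine-like” surprise signal) is large; as the prediction improves, the error shrinks toward zero.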
In fact, the way we learn general rules might be different from the way we learn exceptions–the things about the world that surprise us. Some scientists postulate that sleep connects these two ways of learning. For example, learning about birds that can’t fly (like penguins and ostriches) shouldn’t interfere with our understanding that most birds do fly–and maybe sleeping helps us reconcile these exceptions with the rule. But Olga emphasizes that we’re still only developing these ideas, and the young field of neuroscience has a long way to go before it answers these questions definitively.
One fundamental area that we need to learn more about, Olga points out, is exactly what happens within a single neuron when we learn. Maybe storing a memory is more than just building connections between neurons; it might also change the structure of each neuron individually. Such a new idea could have a huge impact on how we simulate the brain on a computer.
In the second half of the interview, we pick apart a few dramatic ideas being debated in neuroscience today. First: Why do we make bad decisions? Whether it’s eating another candy bar or refusing to do our homework, we seem wired to choose in ways that harm us in the long run. There’s a better question to ask, though, one that can tell us why we choose these self-defeating things: Evolutionarily, why might it be optimal to make the “bad” choice? A candy bar might have been a great way for a starving hunter-gatherer to shore up calories for the winter; avoiding homework might free us to go on an adventure and learn about the world. Finding the evolutionary reasons behind decisions we ought not to make can help us understand them.
Second, we all know that neurons that “fire together, wire together”; that is, the connection between two neurons strengthens as they get used simultaneously. Actually, this might not always be the case. Olga describes how subliminal reminders that fire neurons only weakly might actually weaken the connection between them, essentially helping you forget. Again, this is new and preliminary research, but it suggests an even more complicated picture of the brain.
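One way to caricature that idea in code: strengthen a connection when two neurons fire strongly together, but weaken it when the activation is only weak and subliminal. The threshold and rates below are invented; this is a sketch of the concept, not the actual mechanism.

```python
def update_connection(weight, coactivation, threshold=0.5, rate=0.1):
    # classic Hebbian learning: strong co-firing strengthens the connection
    if coactivation >= threshold:
        return weight + rate * coactivation
    # the twist: weak, subliminal activation weakens the connection instead
    if coactivation > 0:
        return weight - rate * coactivation
    # no activity at all leaves the connection unchanged
    return weight

strong = update_connection(1.0, coactivation=0.9)  # connection strengthened
weak = update_connection(1.0, coactivation=0.2)    # connection weakened
```

Under the textbook rule, any co-activation would push the weight up; the non-monotonic version above is what makes weak reminders a plausible route to forgetting.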
Finally, how do we estimate how much time passed between two events? One of Olga’s recent studies tested this question by asking test subjects to estimate how long segments of a radio story had lasted. It turns out that the more events occurred between two parts of a story, the longer people guessed that part of the story had taken. The difference between perception and reality could be huge: some subjects overestimated by as much as five minutes on an interval of only three minutes!
Thanks to Olga for the clear and descriptive interview! It’s striking how much insight we have into the brain already, even if neuroscience has a long way to go before it reaches concrete psychological answers.
Afterwards, Stevie comes on to clear up two science news stories. A recent finding from the University of Texas examines the death of Lucy, one of our oldest known ancestors, discovered in 1974. By using X-ray imaging and forensic techniques on Lucy’s skeleton, researchers could pinpoint the injuries that ended her prehistoric life: falling from a tree and hitting the ground at 35 miles per hour. Ironically, the very skills we thank Lucy’s species for developing–walking on the ground rather than living in trees–may have made them less adept at climbing (and more likely to succumb to gravity).
The second story, spreading around the internet like wildfire, concerns a recent signal detected by a Russian Academy of Sciences telescope. The signal comes from nearby, only 95 light years away, and some claim it could come from an extraterrestrial civilization. But it’s wise to wait for more facts to come in: SETI is only just opening its investigation, and there are plenty of reasons this signal is probably a false alarm. For one, seeing a signal once is far more indicative of a malfunctioning satellite than of a repeated broadcast by aliens. More than likely, this is just one more reason to remain skeptical of most things you read on the internet.
As a great show-closer, Ingrid Ockert returns to our show for yet another book review. This time, she brings us Engineers for Change, Matthew Wisnioski’s investigation into the changing perception of engineering over the 1960s. Even today, we often think of the engineer as a cog in the machine–more at home designing missile silos than solving climate change. But various groups of engineers have tried to change this image for the better over time. Even at Princeton, over a thousand students led by engineering professor Steven Slaby protested University research on weapons for the Vietnam War. For more on collaborations between engineering and art, Patrick McCray’s blog Leaping Robot has put out some great articles recently. As always, thanks to Ingrid for coming on the show, and especially for bringing us another alternative look at the perception of science and technology over the past few decades.
The full playlist is on WPRB.com and below.