Featured Image: A wildebeest migration in the Masai Mara National Reserve, Kenya. Shown are billions of the most pivotal members of the grassland ecosystem: pathogens, thriving inside each animal.
Today we hosted pathogen ecologist Dr. Andy Dobson of Princeton’s Department of Ecology and Evolutionary Biology. An expert on parasites across the natural world, he studies everything from pathogen evolution to the drastic effects parasites have on whole ecosystems – and the conversation blew us away. We cover the downsides of an overly clean gut, how the rinderpest virus devastated the Serengeti, how tourists help track wolves with mange, and the dangers of antibiotic resistance. Be prepared to reimagine the place of the invisible pathogen within every ecosystem. And, as always, we dish out some science news (Rosetta’s death, earthquakes in Oklahoma, and spiderweb metamaterials) alongside a brief discussion of rational numbers.
In this installment of These Vibes, we welcomed Joseph Amon, visiting lecturer at the Woodrow Wilson School here at Princeton and Vice President for Neglected Tropical Diseases at Helen Keller International, to talk about human rights – in particular the interdependent rights to health and education – and neglected tropical diseases. Later in the interview he describes his career path, which leads us into a discussion of the different approaches to addressing human rights deficiencies.
First hour: Science news and a survey of the science research being done by astronauts on the International Space Station (ISS).
Second hour (56 minutes in): Interview with Joseph Amon. Interview-only recording below.
Featured image: The interstage of a Saturn V rocket falls into the Atlantic, detaching to save on mass and enable further travel in space. Taken on the Apollo 6 mission by NASA.
We welcome Charles Swanson, Princeton PhD candidate in plasma physics, back to the show for a journey into the science of rockets: how expensive is it to travel around our solar system? What makes rockets with high exhaust velocity better than high-thrust rockets? How hard is it to go to Mars? Also featuring Adam Sliwinski of So Percussion on being an ensemble-in-residence and making music out of cacti, the westerly winds of ancient Tibet, and the life cycles of stars.
In this episode, we brought Cameron Ellis back into the studio. Cameron is a graduate researcher in cognitive neuroscience in the Turk-Browne lab here at Princeton, whose research focuses on consciousness and mental processing. We first talked to Cameron back in May 2016 – in that show he walked us through the nitty-gritty of his research, the fascinating history of the study of consciousness as a scientific discipline, and the important research that has had a profound effect on people’s lives.
Here’s a short, incomplete list of the topics we discuss in the show:
What does the term consciousness even mean? If we’re going to talk about it, we need to be able to define it. Or is the study of consciousness, in fact, our attempt to scramble toward a definition?
What is the idea of qualia, and why is it important to the discourse on consciousness? That brought us to two thought experiments: Mary’s Room (aka the Knowledge Argument) – roughly, does experience add anything if you already know everything about a topic? – and the Inverted Spectrum (is what you see as green what I see as green? how could we know?).
Shortly after, we discussed the Chinese Room thought experiment (could a computer be conscious?), language learning, and Strong AI.
In the last part of the interview Cameron explains the concept of uploading consciousness and the Simulation Hypothesis (that our universe is actually a simulation within the computer of another universe – no but really though).
At the very end of the show, Brian jumps on the mic to discuss a recent New Yorker article on so-called super-recognizers, and a new squad of them in the London police force. Super-recognizers are individuals who are incredibly skilled at facial recognition. This may sound strange, but we all know people (and may even be this way ourselves) who are terrible at recognizing faces – a condition called face blindness – so it makes sense that there are individuals at the other end of the spectrum: people extremely attuned to recognizing an individual, even while trawling through thousands of faces in CCTV footage in search of a serial lawbreaker.
Featured Image: An artificial neural network, one of our best computational tools for uncovering the circuitry of our brains. Courtesy Wikimedia Commons.
We’re jam-packed with science on WPRB this week! Olga Lositsky, graduate student in Princeton’s Neuroscience Institute, graced our show with her thorough understanding of today’s biggest research questions on the brain. We broke down her work on how we make decisions and store memories, which involves both computer models of neurons and psychological experiments. Later in the show, Stevie brings us details on Lucy’s demise and signals that probably aren’t from aliens – and Ingrid Ockert closes it out with her book review on engineers as activists.
Olga began studying neuroscience because its central questions unite other areas of science: psychology, philosophy, and biology come together once we learn how signals propagate around the brain. In fact, scientists categorize neuroscience research into levels of “abstraction.” At the most fundamental level, we can study the machinery of one neuron firing and affecting another; larger than that, we examine groups of neurons that make circuits for more complicated tasks; and finally, looking at the brain as a whole, researchers watch how signals travel from one part of the brain to another with tools like EEG or fMRI, giving a more global understanding. All of these levels pose questions differently, and they often use different terminology–but if we don’t understand each part alone, we’ll never grasp how groups of single neurons can make up a system as complicated as the human personality.
To connect these sub-fields of neuroscience, Olga uses computational modeling as an important tool. With machine learning techniques (which we’ve discussed on this show before), small units called nodes can pass information from layer to layer in a computer program. In the end, the program takes an input and provides an output, just like our brain sees some input and can think of some memory or move a muscle as an output. By changing the architecture of the neural network and observing its behavior, we get clues about what algorithms the brain uses to learn and remember.
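The layered "nodes passing information" idea can be sketched in a few lines of code. This is a minimal, illustrative toy – the layer sizes, weights, and input here are invented for the example and don't come from any actual model in Olga's research:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights):
    """One layer of nodes: a weighted sum of inputs followed by a nonlinearity."""
    return np.tanh(weights @ x)

# Input -> hidden -> output, with randomly initialized weights.
w_hidden = rng.normal(size=(4, 3))   # 3 input values feed 4 hidden nodes
w_output = rng.normal(size=(2, 4))   # 4 hidden nodes feed 2 output nodes

x = np.array([0.5, -1.0, 0.25])      # some input "stimulus"
hidden = layer(x, w_hidden)          # information passes layer to layer...
output = layer(hidden, w_output)     # ...until the network produces an output
```

Training such a network means adjusting the weight matrices until the outputs match the desired behavior, and it's the choice of architecture – how many layers, how they connect – that researchers vary to probe what algorithms the brain might be using.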
Other technologies have been instrumental in unveiling the brain’s inner workings. Making better robots means programming them to learn, and the codes that artificial intelligence engineers use here have given insight into how the brain processes information. Robots make predictions about the way the world works and have to correct themselves when their predictions are wrong–and now, we think that dopamine might have something to do with the way our brain deals with our incorrect predictions.
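The "predict, then correct when wrong" loop described above can be captured in a toy update rule. This is only a sketch of the general prediction-error idea; the learning rate and reward values are made up for illustration, not taken from any real model:

```python
def update_prediction(prediction, reward, learning_rate=0.1):
    """Nudge the prediction toward the observed reward by a fraction
    of the error -- the error signal is the quantity dopamine is
    thought to track."""
    error = reward - prediction          # how wrong was the prediction?
    return prediction + learning_rate * error

prediction = 0.0
for _ in range(50):                      # repeatedly observe a reward of 1.0
    prediction = update_prediction(prediction, 1.0)

print(round(prediction, 2))              # the prediction converges toward 1.0
```

Early on, every observation produces a big error (a big "surprise"); as the prediction improves, the error shrinks toward zero – the same qualitative pattern seen in dopamine responses to expected versus unexpected rewards.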
In fact, the way we learn about general rules might be different than the way we learn about exceptions, or things about the world that surprise us. Some scientists are postulating that we connect these two ways of learning through sleep. For example, learning about birds that can’t fly (like penguins and ostriches) shouldn’t interfere with our understanding that most birds do fly – and maybe sleeping helps us reconcile these exceptions to the rule. But Olga emphasizes that we’re still only developing these ideas, and the young field of neuroscience has a long way to go until we answer these questions definitively.
One fundamental area that we need to learn more about, Olga points out, is exactly what happens within a single neuron when we learn. Maybe storing a memory is more than just building connections between neurons; it might also change the structure of each neuron individually. Such a new idea could have a huge impact on how we simulate the brain on a computer.
In the second half of the interview, we pick apart a few dramatic ideas being debated in neuroscience today. Firstly: Why do we make bad decisions? Whether it’s eating another candy bar or refusing to do our homework, humans might seem wired to choose in ways that harm us in the long run. In fact, though, there’s a better question to ask: Evolutionarily, why might it be optimal for us to make this “bad” choice? For example, a candy bar might have been a great way for a starving hunter-gatherer to store up calories for the winter; avoiding homework might allow us to go on an adventure and learn about the world. There are evolutionary reasons why we make decisions we ought not to, and these reasons can help us understand them.
Secondly, we all know that neurons that “fire together, wire together”; that is, the connection between two neurons strengthens as they get used simultaneously. Actually, this might not always be the case. Olga describes how subliminal reminders that fire neurons only weakly might actually weaken the connection between them, essentially helping you to forget. Again, this is new and preliminary research, but it paints an even more complicated picture of the brain.
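The asymmetry just described – strong co-activation strengthens a connection, weak activation weakens it – can be sketched as a toy update rule. The threshold and rates here are invented for illustration; real synaptic plasticity rules are far subtler:

```python
def update_weight(weight, activation, threshold=0.5, rate=0.1):
    """Strengthen a connection when activation is strong, weaken it
    when activation is present but weak (as with subliminal reminders)."""
    if activation >= threshold:
        return weight + rate * activation     # "fire together, wire together"
    elif activation > 0:
        return weight - rate * activation     # weak firing weakens the link
    return weight                             # no activation: no change

w = 1.0
w = update_weight(w, 0.9)   # strong reminder: the connection grows
w = update_weight(w, 0.1)   # subliminal reminder: the connection shrinks
print(round(w, 2))
```

The counterintuitive part is the middle branch: a little activation is worse for the connection than none at all, which is exactly what makes the subliminal-reminder result so surprising.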
Finally, how do we estimate how much time passed between two events? One of Olga’s recent studies tested this question by having test subjects guess how long a radio story took to listen to. It turns out, the more events happened between two parts of a story, the more time people guessed that part of the story took. The difference between perception and reality was sometimes huge, with some estimating as much as five minutes too long (on an interval of only three minutes!).
Thanks to Olga for the coherent and descriptive interview! It’s really clear to me how much insight we have about the brain now, even if neuroscience has a long way to go before we get to concrete psychological answers.
Afterwards, Stevie comes on to clear up two science news stories. A recent finding from the University of Texas examines the death of Lucy, one of our oldest known ancestors, discovered in 1974. By applying X-ray scans and forensic techniques to Lucy’s skeleton, researchers could pinpoint the injuries that ended her prehistoric life: falling from a tree and hitting the ground at 35 miles per hour. Ironically, the very skill we thank Lucy’s species for developing – walking on the ground rather than living in trees – may have made them less adept at climbing (and more likely to succumb to gravity).
The second story, spreading around the internet like wildfire, concerns a recent signal picked up by a Russian Academy of Sciences telescope. The signal comes from nearby, only 95 light-years away, and some claim it could come from an extraterrestrial civilization. But it’s wise to wait for more facts to come in: SETI is only just opening its investigation, and there are a lot of reasons this signal is probably a false alarm. For one, a signal seen only once points more toward a malfunctioning satellite than toward a deliberate, repeating broadcast from aliens. More than likely, this is just one more reason to remain skeptical of most things you read on the internet.
As a great show-closer, Ingrid Ockert returns to our show for yet another book review. This time, she brings us Engineers for Change, an investigation by Matthew Wisnioski into the changing perception of engineering over the 1960s. Even today, we often think of the engineer as a cog in the machine – more at home designing missile silos than solving climate change. But various groups of engineers have tried changing this image for the better over time. Even at Princeton, over a thousand students led by engineering professor Steven Slaby protested University research for weapons in the Vietnam War. For more information on collaborations between engineering and art, Patrick McCray’s blog Leaping Robot has put out some great articles recently. As always, thanks to Ingrid for coming on the show, and especially for bringing us another alternative look into the perception of science and technology over the past few decades.