Featured Image: A wildebeest migration in Masai Mara National Park, Kenya. Shown are billions of the most pivotal parts of the grassland ecosystem: pathogens, thriving inside each animal.
Today we hosted pathogen ecologist Dr. Andy Dobson, of Princeton’s Department of Ecology and Evolutionary Biology. As an expert on parasites all over the natural world, he studies everything from pathogen evolution to their drastic effects on ecosystems – and it blew our show away. We cover the detriment of a clean gut, how the rinderpest virus devastated the Serengeti, how tourists track wolves with mange, and the dangers of antibiotic resistance. Be prepared to reimagine the place of the invisible pathogen within every ecosystem. And, as always, we dish out some science news (Rosetta’s death, earthquakes in Oklahoma, and spiderweb metamaterials) alongside a brief discussion on rational numbers.
In this installment of These Vibes, we welcomed Joseph Amon, visiting lecturer at the Woodrow Wilson School here at Princeton and Vice President for neglected tropical diseases at Helen Keller International, to discuss human rights, the rights to health and education and their interdependence, and neglected tropical diseases. Later in the interview he describes his path, which takes us into a discussion of the different approaches to addressing human rights deficiencies.
First hour: Science news and a survey of the science research being done by astronauts on the International Space Station (ISS).
Second hour (56 minutes in): Interview with Joseph Amon. Interview-only recording below.
Featured image: The interstage of a Saturn V rocket falls into the Atlantic, detaching to save on mass and enable further travel in space. Taken on the Apollo 6 mission by NASA.
We welcome Charles Swanson, Princeton PhD candidate in plasma physics, back to the show for a journey into the science of rockets: how expensive is it to travel around our solar system? What makes rockets with high exhaust velocity better than high-thrust rockets? How hard is it to go to Mars? Also featuring Adam Sliwinski of So Percussion on being an ensemble-in-residence and making music out of cacti, the westerly winds of ancient Tibet, and the life cycles of stars.
In this episode, we brought Cameron Ellis back in the studio. Cameron is a graduate researcher in Cognitive Neuroscience in the Turk-Browne lab here at Princeton, whose research focuses on consciousness and mental processing. We first talked to Cameron back in May 2016 – in that show he walked us through the nitty gritty of his research, as well as the fascinating history of the study of consciousness as a scientific discipline and the important research that has had a profound effect on people’s lives.
Here’s a short, incomplete list of the topics we discuss in the show:
What does the term consciousness even mean? If we’re going to talk about it, we need to be able to define it. Or is the study of consciousness, in fact, our attempt to scramble toward a definition?
What is the idea of qualia, and why is it important to the discourse on consciousness? That brought us to the Mary’s Room thought experiment (aka the Knowledge Argument) – roughly, does experiencing something add anything if you already know everything about it? – and the Inverted Spectrum (is what you see as green what I see as green? how could we know?).
Shortly after, we discussed the Chinese Room thought experiment (could a computer be conscious?), language learning, and Strong AI.
In the last part of the interview Cameron explains the concept of uploading consciousness and the Simulation Hypothesis (that our universe is actually a simulation within the computer of another universe – no but really though).
At the very end of the show, Brian jumps on the mic to discuss a recent New Yorker article on so-called super-recognizers, and a new squad of them in the London police force. Super recognizers are individuals who are incredibly skilled at facial recognition. This may sound strange, but we all know people (and may even be this way ourselves) who are terrible at recognizing faces – people with something called face blindness – so it makes sense that there are individuals on the other end of the spectrum, those who are extremely attuned to recognizing an individual, even as they’re trawling through the thousands of faces in a CCTV video searching for that serial lawbreaker.
Featured Image: An artificial neural network, one of our best computational tools for uncovering the circuitry of our brains. Courtesy Wikimedia Commons.
We’re jam packed with science on WPRB this week! Olga Lositsky, graduate student in Princeton’s Neuroscience Institute, graced our show with her thorough understanding of today’s biggest research questions on the brain. We broke down her work on how we make decisions and store memories, which involves both computer models of neurons and psychological experiments. Later in the show, Stevie brings us details on Lucy’s demise and signals that probably aren’t from aliens–and Ingrid Ockert closes it out with her book review on engineers as activists.
Olga began studying neuroscience because its central questions unite other areas of science: psychology, philosophy, and biology come together once we learn how signals propagate around the brain. In fact, scientists categorize neuroscience research into levels of “abstraction.” At the most fundamental level, we can study the machinery of one neuron firing and affecting another; larger than that, we examine groups of neurons that make circuits for more complicated tasks; and finally, looking at the brain as a whole, researchers watch how signals travel from one part of the brain to another with tools like EEG or fMRI, giving a more global understanding. All of these levels pose questions differently, and they often use different terminology–but if we don’t understand each part alone, we’ll never grasp how groups of single neurons can make up a system as complicated as the human personality.
To connect these sub-fields of neuroscience, Olga uses computational modeling as an important tool. With machine learning techniques (which we’ve discussed on this show before), small units called nodes can pass information from layer to layer in a computer program. In the end, the program takes an input and provides an output, just like our brain sees some input and can think of some memory or move a muscle as an output. By changing the architecture of the neural network and observing its behavior, we get clues about what algorithms the brain uses to learn and remember.
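To make that concrete, here’s a minimal sketch of information passing through layers of nodes – the architecture and weights below are invented for illustration, not taken from Olga’s models:

```python
import math

def forward(inputs, layers):
    """Pass an input through successive layers of nodes.

    Each layer is a list of nodes; each node is a list of weights,
    one per activation in the previous layer. A node's output is a
    sigmoid of the weighted sum of its inputs.
    """
    activations = inputs
    for layer in layers:
        activations = [
            1.0 / (1.0 + math.exp(-sum(w * a for w, a in zip(node, activations))))
            for node in layer
        ]
    return activations

# A toy 2-input -> 2-hidden -> 1-output network with made-up weights.
network = [
    [[0.5, -0.2], [0.3, 0.8]],   # hidden layer: two nodes
    [[1.0, -1.0]],               # output layer: one node
]
print(forward([1.0, 0.0], network))
```

Learning, in this picture, is just the slow adjustment of those weights until the outputs become useful – which is exactly the kind of algorithmic question Olga probes by changing the network’s architecture.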
Other technologies have been instrumental in unveiling the brain’s inner workings. Making better robots means programming them to learn, and the codes that artificial intelligence engineers use here have given insight into how the brain processes information. Robots make predictions about the way the world works and have to correct themselves when their predictions are wrong–and now, we think that dopamine might have something to do with the way our brain deals with our incorrect predictions.
In fact, the way we learn about general rules might be different than the way we learn about exceptions, or things about the world that surprise us. Some scientists are postulating that we connect these two ways of learning through sleep. For example, learning about birds that can’t fly (like penguins and ostriches) shouldn’t interfere with our understanding that most birds do fly–and maybe sleeping helps us reconcile these exceptions to the rule. But Olga emphasizes that we’re still only developing these ideas, and the new field of neuroscience has a long way to go until we answer these questions definitively.
One fundamental area that we need to learn more about, Olga points out, is exactly what happens within a single neuron when we learn. Maybe storing a memory is more than just building connections between neurons; it might also change the structure of each neuron individually. Such a new idea could have a huge impact on how we simulate the brain on a computer.
In the second half of the interview, we pick apart a few dramatic ideas that are being debated in neuroscience today. Firstly: Why do we make bad decisions? Whether it’s eating another candy bar or refusing to do our homework, we humans might seem wired to choose in ways that harm us in the long run. In fact, though, there’s a better question to ask that can tell us why we choose these self-defeating things. Evolutionarily, why might it be optimal for us to make this “bad” choice? For example, a candy bar might have been a great way for a starving hunter-gatherer to shore up calories for the winter; avoiding homework might allow us to go on an adventure and learn about the world. There are evolutionary reasons why we make decisions we ought not to, which can help us understand them.
Secondly, we all know that neurons that “fire together wire together;” that is, the connection between two neurons strengthens as they get used simultaneously. Actually, this might not always be the case. Olga describes how subliminal reminders that fire neurons weakly might actually weaken the connection between them, basically helping you to forget. Again, this is new and preliminary research, but it makes us consider an even more complicated picture of the brain.
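As a toy illustration of that idea (the rule, rates, and threshold here are entirely made up for the sketch – real synaptic plasticity models are far subtler):

```python
def update_weight(weight, pre, post, rate=0.1, threshold=0.5):
    """Toy Hebbian-style rule: strong co-activation strengthens a
    connection, while weak co-activation weakens it.

    `pre` and `post` are activation levels in [0, 1]. The threshold
    separating strengthening from weakening is purely illustrative.
    """
    co_activation = pre * post
    if co_activation > threshold:
        # "fire together, wire together"
        return weight + rate * co_activation
    # weak, subliminal pairings nudge the connection downward
    return weight - rate * (threshold - co_activation)

w = 1.0
w = update_weight(w, pre=0.9, post=0.9)   # strong pairing: strengthens
w = update_weight(w, pre=0.2, post=0.3)   # subliminal pairing: weakens
print(w)
```

The surprise in the research Olga describes is the second branch: a weak reminder doesn’t just fail to strengthen a memory, it may actively erode it.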
Finally, how do we estimate how much time passed between two events? One of Olga’s recent studies tested this question by having test subjects guess how long a radio story took to listen to. It turns out, the more events happened between two parts of a story, the more time people guessed that part of the story took. The difference between perception and reality was sometimes huge, with some estimating as much as five minutes too long (on an interval of only three minutes!).
Thanks to Olga for the coherent and descriptive interview! It’s really clear to me how much insight we have about the brain now, even if neuroscience has a long way to go before we get to concrete psychological answers.
Afterwards, Stevie comes on to clear up two science news stories. A recent finding from the University of Texas examines the death of Lucy, the famous early human ancestor discovered in 1974. By using X-rays and forensic techniques on Lucy’s skeleton, the researchers could tell precisely the injuries that ended her prehistoric life: falling from a tree and hitting the ground at 35 miles per hour. Ironically, the very skills we thank Lucy’s species for developing, walking on the ground and not in trees, may have made them less adept at climbing (and more likely to succumb to gravity).
The second story, spreading around the internet like wildfire, concerns a recent signal from a Russian Academy of Sciences telescope. The signal comes from nearby, only 95 light years away, and some claim it could come from an extraterrestrial civilization. But it’s wise to wait for more facts to come in: SETI is only just opening its investigation, and there are a lot of reasons this signal is probably a false alarm. For one, seeing a signal once is a lot more indicative of a malfunctioning satellite than of a repeated broadcast by aliens. More than likely, this is just one more reason to remain skeptical of most things you read on the internet.
As a great show-closer, Ingrid Ockert returns to our show for yet another book review. This time, she brings us Engineers for Change, an investigation by Matthew Wisnioski into the changing perception of engineering over the 1960s. Even today, we often think of the engineer as a cog in the machine–more at home designing missile silos than solving climate change. But various groups of engineers have tried changing this image for the better over time. Even at Princeton, over a thousand students led by Engineering professor Steven Slaby protested University research for weapons in the Vietnam War. For more information on collaborations between engineering and art, Patrick McCray’s blog Leaping Robot has put out some great articles recently. As always, thanks to Ingrid for coming on the show, and especially for bringing us another alternative look into the perception of science and technology over the past few decades.
This week on These Vibes, Stevie interviewed Matt Grobis, graduate researcher in the Department of Ecology and Evolutionary Biology here at Princeton. Matt is also director and co-founder of Princeton Open Labs, which organizes science outreach talks and activities for local schools. He writes for a couple of blogs as well: Highwire Earth, an interdisciplinary blog on sustainable development in our changing and growing society, where he’s managing editor and co-founder (Matt and Julio Herrera Estrada, the fellow founder of Highwire, previously came on TVR2C for a short segment discussing the site), and The Headbanging Behaviorist, which mixes science, activism, and music (so he fits right in here at These Vibes Are Too Cosmic).
We began our discussion with some of the things animals can do together that they cannot do alone, such as migration and predator evasion. For example, shiners – a kind of small schooling fish – prefer to stay in shadows because the dark protects them from predators lurking above, but, as Matt discusses in the show, they can’t see gradients in light well, and thus have difficulty finding the shadows unless they’re in a group. Individuals could measure the light level where they were and change their speed to match it, but they couldn’t actively move to darker areas, so a lone shiner is much more likely to be snapped up by a predator. (Here’s the study that found this.)
Grobis conducts his research both in a lab, with actual schools of minnows in a tank and cameras recording their movement (he even has some fake predator concoction to scare the fish), and “theoretically” – read: with computer models (like this interesting agent-based model he mentions in the show). Matt’s lab research measures what’s called the “startle.” This is the wave that passes through a school of minnows, for example, when they are, well, startled. In the show Matt also calls this a “cascade.” (Here’s the original paper on startles, also featured in Cell, that Matt’s research is building on.) Matt is seeing if the mechanisms by which the cascade spreads hold up when there’s elevated perception of risk in the environment. Preliminary results indicate that under increased perception of risk, startles might spread a bit differently!
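As a rough cartoon of how a startle might cascade, here’s a threshold-contagion toy model – this is not Matt’s actual analysis, and the neighborhood size and thresholds are invented for the sketch:

```python
def startle_cascade(n_fish, first_startled, threshold, radius=1):
    """Spread a startle through a row of fish.

    A calm fish startles when the fraction of already-startled
    neighbors within `radius` meets `threshold`. Returns the set of
    startled fish once the cascade stops. Raising the school's
    perceived risk can be modeled as lowering the threshold.
    """
    startled = {first_startled}
    changed = True
    while changed:
        changed = False
        for fish in range(n_fish):
            if fish in startled:
                continue
            neighbors = [j for j in range(max(0, fish - radius),
                                          min(n_fish, fish + radius + 1))
                         if j != fish]
            frac = sum(j in startled for j in neighbors) / len(neighbors)
            if frac >= threshold:
                startled.add(fish)
                changed = True
    return startled

# Low perceived risk (high threshold): the startle stays local.
print(len(startle_cascade(10, first_startled=0, threshold=0.6)))
# High perceived risk (low threshold): the whole school startles.
print(len(startle_cascade(10, first_startled=0, threshold=0.4)))
```

Even in this crude version, a small change in how touchy each fish is flips the outcome from a local ripple to a school-wide wave – the kind of sensitivity Matt is probing experimentally.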
As an example of interesting group behavior, Matt later discussed a specific study (“Uninformed individuals promote democratic consensus in animal groups”, Couzin et al. 2011) that was done with schools of fish. In this experiment the group cannot break apart, but part of the group wants to go towards a blue stimulus and another part really wants to go towards yellow – the behavior that emerges is interesting and seems very relevant to human situations we get into all the time. (Choosing a dinner place in a big group, anyone?) You can take a look at the study here, and read a blog entry in Headbanging Behaviorist where Matt discusses what happened behind the scenes (a kind of “making of” of the study – this will be much more accessible than reading the paper itself).
After the interview, Matt noted that “one of the reasons Couzin et al. 2011 is so cool is that they started with the models and found the results in that theoretical universe on their computers. Then, they really hammered it home by showing it’s true in the real world too. So it’s a good example of the power of combining theoretical models with experiments.” How cool!
In the show we received some excellent listener questions. One listener asked whether Matt’s research on the behaviors of groups could be used to control humans. From this we determined that maybe “control” was a bit strong, but that perhaps this group research could help us better guide traffic, be it in a street or a busy transit hub like an airport. Remember, “ants don’t have traffic jams.”
(In this part Matt mentioned research on autonomous robots that his adviser Iain Couzin is working on. It’s sponsored by the Office of Naval Research and is shared with Mechanical and Aerospace Engineering Professor Naomi Leonard.)
If you live in the Princeton area, and especially if you have school-aged children, please check out Matt Grobis’s side project Open Labs!
Featured image: A tomato hornworm being devoured, “casually,” by wasp larvae in their cocoons. Courtesy Wikimedia Commons and the penultimate chapter of Miss Jane.
We were fortunate this week to air a phenomenal interview with author Brad Watson, Professor of Creative Writing at University of Wyoming and acclaimed novelist with two short-story collections and two novels. His newest work, Miss Jane, just came out in July 2016, so we took the opportunity to ask Brad about the writing process and how he came to think of the world from Jane’s perspective. The conversation meanders through questions of gender identity, nature and Southernness, and feeling like the odd one out–it’s a thoroughly fascinating talk, so listen to the audio above and don’t just take my word for it.
The novel centers on Jane Chisolm, born on a cattle farm in 1915 Mississippi. From her first hours, Jane is defined by a birth defect: it leaves her incontinent and incapable of sex. Modern surgical technology could remedy a condition like this immediately. But in her day and age, Jane is left without recourse. The novel captures its heroine’s full arc, and over its course Brad explores the many consequences of Jane’s affliction.
A character like Jane is hard to relate to, especially for an author writing a century later with little to go off of but a childhood in the South. The story’s inspiration comes through a great-aunt, a mysterious figure that Brad only met once and knew mostly through old photos. Because of the lack of information, the novel took 13 years to write, only beginning seriously in 2013 when Brad connected his great-aunt’s story with a plausible medical condition that made her feel more concrete.
Even then, Brad couldn’t get a good look at who Jane might have been as a person without developing the story’s supporting characters. A small cast of dynamic personalities, including Jane’s nuclear family and the doctor that treats her, bolster the novel and give Brad different lenses into seeing Jane. He makes a point that characters shouldn’t be written into a story unless they help the reader understand the protagonist–and in this sparse collection of characters, Brad’s writing makes everyone seem like a piece of the puzzle, not just illuminating Jane but giving shape to the novel’s central conundrums.
The writing stands out for its perceptive descriptions of the natural world. Jane finds solace in the Southern forest near her home, where Brad remarks that everything is strange if you look hard enough: from mushrooms in the soil to fish that sift water through their gills to breathe. To a character that feels like an outsider in the human world, the oddities of wilderness are a comfort.
We talk a while about the strangeness of the South, too. It’s a place Brad doesn’t think he’ll be able to get over, even now that he lives in Wyoming and only visits his childhood home occasionally. More than anywhere else in the US, the South maintains its own mentality, and the roots of it are deeply twisted around a history that Southerners spend their lives trying to process. Brad doubts he can stop writing about the region, since he has such a backlog of stories it has inspired.
On my mind as I read Miss Jane was the plot’s intricate connections with the American dialogue on gender identity. Brad clarifies that he began the novel years before this debate became mainstream, though he did wonder about Jane’s possible intersexuality in the course of defining her as a character. In the end, he writes Jane as a heterosexual female–which is fitting for the times, since 1920s Mississippian culture had no notion of the gender spectrum. Still, the contrast between Miss Jane and our modern conversation is an important one, since Jane’s life was severely affected by a lack of medical technology that nowadays gives us the power to perform, say, sex reassignment surgeries.
I can’t recommend this book highly enough–not only is it an entertaining and beautiful read, but the wholeness which Brad builds into his characters is obvious from the start. For more information on the rest of his book tour or on Miss Jane, visit Brad’s website here.
Our show-closer comes from a listener who asked, semi-seriously, if the grass is truly always greener on the other side. Semi-seriously, we answer: the phrase came first from the Billy Jones tune above. Statistically, of course, your grass is probably about as green as everyone else’s, but Stevie brings us back to the real meaning of the phrase (comparing your well-being to others) and how it might explain Trump supporters.
Featured image: NASA’s Dawn mission, currently orbiting its second destination in the Asteroid Belt, is equipped with an ion thruster to boost its efficiency and make visiting multiple bodies possible. Courtesy NASA’s JPL.
Dr. Edgar Choueiri of Princeton’s Mechanical and Aerospace Engineering is on the air this week, and he brings his innovative physics applications to our conversation. Hear all about the dramatic Hall thruster technology as a method of space propulsion, and then get blown away by the idea of virtual-reality 3D sound. Throughout the interview, I had the feeling that science fiction was coming to life out of Edgar’s research, so check out the full recording to be really amazed at where technology is headed.
Edgar began his work at Princeton researching space propulsion. For many years, we’ve had a solution to this problem: chemical thrusters, which burn massive amounts of fuel to blast rockets up into space. However, it’s clear that this method is horribly inefficient. Just look at a typical Saturn V rocket, where a tiny payload sits on a massive container of fuel. All chemical thrusters work this way, since the amount of rocket fuel needed to lift a load out of Earth’s gravity is about ten times the mass of the load. Since combustion ejects particles at a particular speed of a few kilometers per second, we’re stuck with this inefficiency as long as we burn chemicals to get into space.
The most obvious way to improve this picture is by forcing particles out of a spaceship at higher speeds. We can achieve this acceleration by propelling the rocket with plasma, a charged gas that responds to electric fields. By making an electric field–which is easy to do with some solar panels and a metal grid–the spacecraft ejects plasma at any speed we like, which can drastically improve the thrust efficiency. Edgar makes an analogy of driving across the country: a chemical rocket is so inefficient that you need to stop for gas tens of times between New York and California, whereas a plasma thruster would let you go the whole way without refueling.
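Edgar’s analogy can be made quantitative with the classical (Tsiolkovsky) rocket equation; the exhaust speeds below are round illustrative numbers, not figures from the interview:

```python
import math

def mass_ratio(delta_v, exhaust_velocity):
    """Tsiolkovsky rocket equation: how fuel-heavy a rocket must be
    (launch mass / final mass) to achieve a given change in velocity."""
    return math.exp(delta_v / exhaust_velocity)

delta_v = 9_400  # m/s, roughly the delta-v to reach low Earth orbit

chemical = mass_ratio(delta_v, 3_000)   # combustion: a few km/s exhaust
plasma = mass_ratio(delta_v, 30_000)    # plasma thruster: tens of km/s (illustrative)
print(f"chemical: {chemical:.1f}x, plasma: {plasma:.2f}x")
```

Because the exhaust velocity sits in an exponent, a tenfold increase in exhaust speed collapses the required fuel mass from over twenty times the final mass to a modest fraction of it – the whole case for plasma propulsion in one formula.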
In some ways, we’re stuck with chemical rockets, because plasma engines aren’t good enough to get us out of Earth’s atmosphere. But once a spacecraft is in orbit, Edgar’s thrusters make the next steps cheaper and quicker. For example, a trip to Mars might take nine months with chemical fuel, but only three months with plasma fuel.
Edgar has seen a lot of progress in implementing these new technologies over the years. When he began graduate school, ion thrusters were science fiction; now they’re used widely by NASA and private companies. A newer design, the Hall thruster, uses clever arrangements of electromagnetic fields to keep particles confined and boost efficiency. And as Edgar’s group improves the Hall thruster design, it’s also seeing more use in space–perhaps an explosion in their use is coming, as Edgar hints at by mentioning SpaceX’s interest in the technology.
Aside from space propulsion, Edgar has another specialty that’s seeded a second laboratory at Princeton: 3D audio engineering. When we hear sounds, our brains can pinpoint their origin beneath our conscious awareness. An airplane overhead, a voice behind us… we could point to a sound’s source even if our eyes were closed. Unfortunately, sound reproduced by speakers or headphones loses this spatial signature. To Edgar, hearing the breadth of a symphony confined to the location of a speaker isn’t authentic. That’s why he’s working to restore three-dimensionality to recorded audio.
Our ears can find a sound’s source from three cues. The first is the small delay between sounds reaching your right ear and your left ear, or the inter-aural time difference. Second is the loudness of sound in one ear compared to the other, or the inter-aural level difference. Lastly, the specific shape of your earlobes funnels sounds to your eardrums, and this personalized filter lets our brain know whether a sound is near or far, above or below.
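The first cue can even be estimated with a little geometry. Here’s a sketch using Woodworth’s textbook spherical-head approximation (the head radius is a typical assumed value, not a measurement from the show):

```python
import math

def interaural_time_difference(azimuth_deg, head_radius=0.0875,
                               speed_of_sound=343.0):
    """Woodworth's spherical-head approximation of the inter-aural
    time difference, in seconds, for a source at a given azimuth
    (0 deg = straight ahead, 90 deg = directly to one side)."""
    theta = math.radians(azimuth_deg)
    return (head_radius / speed_of_sound) * (theta + math.sin(theta))

# A source directly to one side gives the maximum delay:
print(f"{interaural_time_difference(90) * 1e6:.0f} microseconds")
```

A few hundred microseconds is all the brain needs – which is why reproducing these tiny delays faithfully matters so much for convincing 3D audio.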
Since the 1960s, we’ve mastered the first two cues, typically by recording sounds from two microphones on the sides of a dummy head. In fact, these “binaural” recordings are enough for about a third of the population: the inter-aural time difference and inter-aural level difference will convince them that sounds are happening in 3D. For the rest of us, though, the unique shapes of our own ears affect our spatial perception of sound. Making a recording that everyone will perceive as truly 3D means we need to record audio specially for each pair of ears. Further, your brain expects sounds to shift as you turn your head – but recordings don’t move along with you. So, there are a lot of obstacles to perfecting 3D audio for everyone.
Edgar’s group is fighting off these remaining problems one at a time. One of his students, Joseph Tylka, makes facial recognition software to track head movements and modify audio playback in real time, so that the 3D experience is uninterrupted when you shift around. Another student, Rahulram Sridhar, is developing a method to tune 3D audio to your earlobes with quick image analysis. Finally, the group is working on sound wave cancellation, so that different areas in space would receive completely different soundwaves from the same set of speakers.
All this innovation sounds far fetched, but these projects are moving along quickly–and Edgar foresees a lot of short-term applications. Imagine four friends sitting in a car, all listening to the same sound system but all hearing different tracks individualized to their ears. Everyone can navigate through a virtual 3D sound field, listening to hyperrealistic concerts from the mezzanine or from behind the stage according to their wishes. If Dr. Choueiri’s lab succeeds, we could have sound systems like this in the very near future.
For more information on present-day technology for 3D sound, check out Jambox and LiveAudio, which Dr. Choueiri demonstrates during the interview.
Featured Image: These fMRI images from Jean Gotman at McGill highlight a small part of the brain, because it’s statistically more active than it was at some other moment. Remember: the other 90% of the brain is still actively in use!
For this week’s show, we replayed an earlier interview with Sam McDougle, a graduate student in the Department of Neuroscience at Princeton. He and Stevie dive into his research on motor skills, learning, and the brain, and they debunk the ever-popular myth that we only use 10% of our neurons at once. After the interview, I describe the Earth’s dynamo problem and some new research from Osaka University that raises more puzzles than it solves.
Sam and the rest of the Taylor Lab want to know what the brain is doing when we learn new skills. We’re all used to the feeling when something clicks and a skill becomes easy for us: riding a bike, cutting with scissors, and typing have all become automatic for us eventually. But initially, developing a new task into a skill requires practice. Building up muscle memory and neuronal networks that fit the task is a long process in the brain, and it’s hard for science to unveil completely. As Sam says, researchers rarely find conclusions; they just scramble for hints until they can piece together ideas about the truth.
To study the learning process, Sam first experimented on mice, fitting them with brain electrodes and having them repeat behaviors. But even though the brain fundamentals are very similar between mice and humans, the complicated tasks that people have to learn go beyond the range of mouse-science. Now, Sam brings in human volunteers to study instead, and relies on less invasive data collection (like fMRI instead of electrodes inside the skull).
To be clear, there’s a lot going on in the brain at once. An fMRI scan may just show a small blob of color somewhere in the brain, but this highlight just shows a place where the brain was more active than it used to be before the experiment. Sam’s brain scans might showcase, for example, neural areas that handle muscle memory; but even as these particular brain sections are especially active, the rest of the neurons are still abuzz. The myth that “we only use 10% of our brains” might have originated in Dale Carnegie’s famous self-help book, How to Win Friends and Influence People–and even there, it only appears as a misattributed quote in the foreword. So don’t believe the rumors, because your whole brain is always hard at work.
Testing hypotheses on human subjects is no easy matter. For one, Sam can’t have people learn nuanced skills in the experiment. Teaching thirty subjects to play the violin and measuring their brainwaves as they do it would take months, cost a lot, and not be very repeatable from person to person. Instead, Sam and his colleagues have to think of tasks that are complicated enough that we must learn them, but still so simple that the study can proceed quickly and repeatably.
One result of the studies so far is the demonstration of “implicit” versus “explicit” motor skill learning. Teaching someone to snowboard might involve phrases like “keep your weight on your back leg” or “dig in to brake;” these “explicit” instructions give the learner some reference for improving quickly. But, verbal commands alone can’t do the whole job, since most of the learning comes from testing behaviors out. Imagine reading a book about swimming and then jumping in the pool for the first time: you’d still have a lot to figure out about treading water. The implicit skills you develop from trying, failing, and trying again are ultimately the backbone of mastering a task. Sam’s results show that implicit and explicit learning are stored in different parts of the brain, and the research is still attempting to find connections between the two.
Sam closed the interview with a bit about his own musicianship. Among many projects, he plays and records his own music as Polly Hi. I anachronistically played a song from Deceleration, Sam’s newest album, which actually came out months after this interview last year.
I ended the show with a major problem in astrophysics and geology: why is the Earth still magnetized? From what we know about currents and electricity, magnetic fields should decay over time and eventually die out. However, the Earth has a strong magnetic field: it keeps us safe from the solar wind, helps maintain our atmosphere, and brings the auroras to the poles. So, something doesn’t add up. The Earth’s core must have some net current or flow that maintains the magnetic field over billions of years.
Scientists at Osaka University just did an experiment to simulate the Earth’s core and resolve this issue once and for all. The core is made of nickel and iron, so the group put iron wires into a diamond anvil–a device that generates gigantic pressures on a tiny area in a lab. By heating the wires with lasers and putting a current through them, the scientists measured how well iron can conduct a current in the middle of the Earth. Based on their measurements and the strength of the Earth’s magnetic field, the field should have run down after about 700 million years. The problem: the Earth is much, much older than that (over 4 billion years!), and its field is still going strong.
What does this puzzling result tell us? Largely, it’s that there are properties of the Earth’s core that we don’t understand at all. You might think that by being space explorers and mastering fracking and plate tectonics, we’d have figured out the composition of the Earth by now. Unfortunately, while we do understand the crust and mantle of our planet, the nickel-iron core is inaccessible to direct measurement. We can study waves that pass through the middle of the Earth, and we can study high-temperature high-pressure materials in the lab, but understanding the complex motions at the middle of the Earth is probably a long way away.
(Audio note: Unfortunately, this week’s recording is not great. First, it starts a few minutes into the interview, so you miss the introduction; second, a recording issue distorted the quality. If anyone has a better-quality recording, let us know!)
Our guest this week was Dr. Andrea Graham, a professor in the Department of Ecology and Evolutionary Biology here at Princeton. She brought us her insight into the immune system, so we dove into the good and bad sides of the cells that usually keep us healthy. Toward the end, I talk briefly about the importance of sea ice algae in the Arctic regions, and how those Northern ecosystems might be in danger if the ice sheets shrink.
Andrea finds the immune system fascinating. It’s a decentralized system, with no one governing body, so it must deal with problems locally: each cell works independently. When white blood cells gang up on diseases in the body, they communicate their strategies by sending each other small protein messages. These cytokines let nearby white blood cells attack the bacteria in unison; they might also transmit messages globally through the blood stream to share antibody information with the rest of the body (this is how long-term immunization works).
The problem with decentralized systems is that they can overreact by all acting at once. If every white blood cell in the body reacts simultaneously to a new threat, the whole bloodstream is flooded with cytokines. Your immune system’s overreaction causes fever and swelling–and sometimes even death. It’s hard to produce medicines that work against these cytokine storms, since there’s a delicate balance between stopping the immune system from self-harm and preventing it from fighting real diseases. Discovering a medicine that would slow the system gently is “the million-dollar question,” Andrea says.
As with anything that has limited resources, the immune system competes with other systems in the body. It takes energy to fight nematodes and energy to reproduce, but sometimes there’s not enough energy to do both. Andrea’s group studied a sheep population, brought to a Scottish island centuries ago and left without predators ever since, to see if real groups of animals have individuals choosing between health and reproduction.
The big breakthrough came when the group found an anti-correlation: the sheep that reproduced most often tended to be the ones whose immune systems cleared parasites least effectively. In fact, many sheep carried thriving worm populations in the gut, even though the worms were harmful and cost the individual resources. Trade-offs like this mean a great immune system isn’t always beneficial.
Another balance we discussed was having a weak immune system versus an overly strong one. White blood cells clump together to suffocate bacteria and other threats, but they can go overboard and form more dangerous clumps. Over time, these clusters of white blood cells block passageways in the body, leading to illness. Lupus is one example of an autoimmune disease that acts in this way–and Andrea’s group is looking for correlations between being lupus-prone and having a strong immune system. By joining an existing collaboration that has followed a Taiwanese population over time, the group found evidence for that correlation.
Finally, I ended the show with science news: algae living in the seasonal sea ice of the Arctic are a crucial bedrock of the Northern food chain. The result comes out of this study from the Alfred Wegener Institute in Germany. Scientists drew fat samples from dozens of species in the Arctic Ocean, tracing the composition of fats in the animals back to the algae living in the ice. Evidently, 60–90% of the nutrition delivered to herbivores comes from this food source, which is worrying since the sea ice is quickly shrinking.
We’ll be back next week with more, better-recorded radio. Thanks for tuning in!