Can sound help us understand the complex patterns in our universe? This question leads Nate to Symbolic Sound in Champaign, Illinois, where composer Carla Scaletti guides him on a journey where sound, music, and data intertwine in captivating and thought-provoking ways. Along the way, we’ll meet Kimberly Arcand, who unveils the hidden melodies of space through her celestial soundscapes, delve into the fascinating world of protein folding with Martin Gruebele, and listen to the delicate dance of DNA with Mark Temple.
This episode was inspired by a fantastic LA Times article entitled The Sounds of Science by Sumeet Kulkarni.
Kimberly Arcand is a visualization scientist and emerging technology lead at NASA’s Chandra X-ray Observatory with the Center for Astrophysics, Harvard & Smithsonian.
Martin Gruebele is a professor at the University of Illinois where he leads the Gruebele Group.
Carla Scaletti is an experimental composer, designer of the Kyma sound design language and co-founder of Symbolic Sound Corporation.
Mark Temple is a Senior Lecturer in Molecular Biology in the School of Science at Western Sydney University (WSU). Mark also created a web app that lets anyone plug in and play their own DNA data from a sequencing service such as 23andMe or Ancestry.com.
Connect with The Show About Science:
Instagram: https://www.instagram.com/showaboutscience
Facebook: https://www.facebook.com/theshowaboutscience
YouTube: https://www.youtube.com/showaboutscience
Twitter: https://www.twitter.com/natepodcasts
LinkedIn: https://www.linkedin.com/company/the-show-about-science/
Loved this episode? Leave us a review and rating wherever you listen to podcasts!




Transcript:
Introduction
Nate: Nate here, back for another episode of The Show About Science.
We’ve just entered a century-old building in downtown Champaign, Illinois, and we’re taking the elevator to the top floor where things may start sounding a little bit different.
Hello everyone and welcome to another episode of The Show About Science. This is your host Nate, and on today’s episode we’re going to be playing around with The Science of Sound.
– We’ve just exited the elevator and we’re about to arrive at our destination. So let’s meet our first guest for this episode, which I’m calling The Science of Sound.
Synthesizing Sound with Composer Carla Scaletti
So could you tell our listeners a little bit about yourself?
Carla Scaletti: – My name’s Carla Scaletti. I’m a composer of electronic music. And since I was a kid, I’ve been also fascinated with science and trying to understand the patterns in the universe and decided really as a kid that I wanted to try to use sound to help people understand or recognize or identify patterns in the universe.
Nate: Okay, and Carla, where are we?
[laughing]
Carla Scaletti: – That is a deep question.
Nate: [laughing]
– We are on the Earth.
[laughing]
Carla Scaletti: – So this is Symbolic Sound.
Kurt Hebel and I started this company to make Kyma, which is software and hardware for making sound.
Nate: Kyma makes all sorts of interesting and unusual sounds.
Sounds that crackle and pop.
Sounds made from processing voices or instruments.
Carla Scaletti: People very often use Kyma in a live performance.
Nate: Combining all of these sounds to create musical compositions.
Carla Scaletti: Computer music, computer generated music, but on stage with a live element to it.
Nate: It’s like a kind of improv sort of music.
Carla Scaletti: Some people improvise, other people use scores. So it runs the full range of all different types of music.
Nate: But Kyma can be used for more than just music.
Carla Scaletti: – It’s been used for films like WALL-E.
WALL-E: – WALL-E.
Carla Scaletti: – The voice of WALL-E was Ben Burtt.
Eve: – WALL-E.
Carla Scaletti: – He used his own voice, and it was just supposed to be a temporary test. And he was just watching the picture and moving his hand up and down to make it expressive instead of just mechanically generating it. And in the end, they liked it so much, they used his voice.
Nate: Wow.
Wow.
Sound Design
So how do sound designers like Ben Burtt use Kyma to create sounds like WALL-E?
Carla Scaletti: This is how somebody using Kyma would design a new sound.
Nate: It starts with a single module that generates a sound.
Then you connect those modules to others that process or modify the sound.
Carla Scaletti: So it’s sort of a signal flow diagram.
Nate: And each module in our signal flow diagram represents one block of code.
Carla Scaletti: It’s written in a language called Smalltalk.
And the idea…
Nate: How’s your day going?
And as we send the audio from one block to the next, the code modifies or changes the sound.
Carla Scaletti: Anywhere along this path, you can click on it and listen to what it sounds like.
And you’ll hear how it’s changing as it goes through this path.
Okay, so the first one, this is an oscillator.
It’s kind of a model of the glottis inside your throat.
Nate: As part of the glottis, our vocal cords vibrate, or oscillate, when we talk or sing.
And the speed of those vibrations determines the pitch.
Carla Scaletti: So it’s going up and down depending on how fast it’s vibrating.
Nate: That’s the first block on our signal flow diagram.
The next block simulates your vocal tract.
Carla Scaletti: So it’s a model of how your mouth is shaped and the sinuses or whatever, the different kinds of environment that the glottal pulse goes through.
Nate: And when we change the shape of our mouth, we change the sound that comes out of it too.
Carla Scaletti: It’s going to make something that sounds like different vowels.
Nate: So if we connect an iPad to Kyma and then we move our finger up and down the screen,
It simulates what happens when we open and close our mouth.
So with the module, we are filtering the sound.
And the position of our finger is the data that Kyma uses to change the parameters of the filter.
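Kyma’s internals are its own, but the block-to-block signal flow Carla describes can be sketched in plain Python: a source module (a sine oscillator standing in for the glottal pulse) feeding a filter module whose single parameter plays the role of the finger position. Every name and number below is illustrative, not Kyma’s actual implementation.

```python
import math

def oscillator(freq_hz, duration_s, sample_rate=8000):
    """Source block: a sine oscillator standing in for the glottal pulse."""
    n = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate) for i in range(n)]

def lowpass(samples, cutoff):
    """Filter block: a one-pole low-pass. `cutoff` (0 to 1) plays the role
    of the finger position that opens and closes the simulated mouth."""
    out, prev = [], 0.0
    for s in samples:
        prev = prev + cutoff * (s - prev)
        out.append(prev)
    return out

# Wire the blocks together, just like patching modules in a flow diagram.
tone = oscillator(220, 0.1)
voiced = lowpass(tone, cutoff=0.3)
```

As in Kyma, you could “click anywhere along the path”: listen to `tone` before the filter, or `voiced` after it.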
Sort of like what Ben Burtt was doing when he used his own voice to create WALL-E.
Carla Scaletti: So it sounds kind of like the “ee-ah.”
Nate: Yeah, it almost sounds like chanting.
Carla Scaletti: Yeah.
Nate: Yeah.
And we can use the iPad to send Kyma even more data. Instead of having just one oscillator, we can have four. One for each finger.
Carla Scaletti: So this is the same sound, but now it made four copies. And the frequency of the glottal pulse is controlled left and right. And the vowel, the filtering is up and down.
Nate: Okay, here’s where things start getting interesting.
Computer audio is really just data, which means that we can listen to data.
Carla Scaletti: So everything’s data, actually.
You know, like moving your finger on the iPad, you’re sending a stream of data to Kyma to change the parameters.
Nate: And your voice? That’s data too.
Carla Scaletti: When you’re speaking to a microphone and you go through the analog to digital converter, you’re getting a stream of numbers and you can do arithmetic on those numbers and modify it.
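That stream of numbers really can be modified with ordinary arithmetic. A minimal sketch, with made-up sample values:

```python
# A digitized signal is just a list of numbers, so "processing" is arithmetic.
samples = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]

quieter = [s * 0.5 for s in samples]   # halve the amplitude
inverted = [-s for s in samples]       # flip the waveform's polarity
```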
Listening To Data
Nate: And if we can hear data, then can we listen to something like DNA?
As it turns out, we can and we are. Right now you’re listening to a DNA sequence that molecular biologist Mark Temple converted into sound.
Mark Temple: So what I thought I’d do, I thought I’d look at DNA from the perspective of a protein.
Nate: That protein is kind of like a musician performing a musical score, composed of all the A’s, the T’s, the G’s, and the C’s in a gene.
And the “score” we’ve been listening to is the DNA sequence of the HTT gene responsible for Huntington’s disease.
Mark Temple: I chose that gene because Huntington’s is caused by a repetitive sequence within a DNA sequence.
So those repetitive sequences are quite easy to hear.
You could hear like this random pattern and then it switches to this da-da-da-da-da-da-da pattern and that’s the repeat sequence that’s causing the disease.
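Mark’s app works on real sequencing data, but the basic idea, one note per base, can be sketched like this. The base-to-pitch mapping and the frequencies are arbitrary choices for illustration, not his actual mapping:

```python
# Hypothetical base-to-pitch mapping (frequencies in Hz are arbitrary).
PITCH = {'A': 262, 'T': 294, 'G': 330, 'C': 349}

def sonify(sequence):
    """Turn a DNA string into a list of pitches, one note per base."""
    return [PITCH[base] for base in sequence]

# A random-looking stretch followed by a CAG repeat, like the expansion
# that causes Huntington's disease: the repeat becomes an obvious loop
# of three pitches, easy to pick out by ear.
melody = sonify("GATTACA" + "CAG" * 4)
```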

Kimberly Arcand
Nate: We can also listen to data collected from space.
Kimberly Arcand: What we’re listening to here is the sort of downtown of our Milky Way galaxy, the inner region around the supermassive black hole called Sagittarius A*.
Nate: This is Kimberly Arcand.
Kimberly Arcand: Dr. Kimberly Arcand of NASA’s Chandra X-ray Observatory at the Center for Astrophysics, Harvard and Smithsonian.
Nate: And what you’re listening to is light wave data collected by NASA.
Kimberly Arcand: The infrared light, the optical light, and the X-ray light from the Spitzer, Hubble, and Chandra observatories in space.
Nate: And these light waves are outside of the range that we can normally see with our eyes. Dr. Arcand says that what the human eye can see is just a tiny, tiny sliver of what is out there in the universe.
Kimberly Arcand: It’s like the middle C key on a piano and just a couple of keys on either side.
Nate: This is what’s at the heart of something called data sonification, being able to hear things that we might not be able to see.
Think of it like a chart or a graph, but for your ears.
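That chart-for-your-ears idea can be sketched as a simple linear mapping from data values to pitch. The pitch range and the brightness readings below are made up for illustration:

```python
def to_pitches(values, low_hz=200.0, high_hz=800.0):
    """Map each data point linearly into an audible pitch range,
    the auditory analogue of plotting points on a y-axis."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid dividing by zero for flat data
    return [low_hz + (v - lo) / span * (high_hz - low_hz) for v in values]

# Brightness readings (made-up numbers) become a rising-and-falling melody.
pitches = to_pitches([1.0, 3.0, 5.0, 3.0, 1.0])
```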
The Sonification Of Protein Folding
Which brings us to Martin.
Martin Gruebele: Sure. My name is Martin Gruebele, and my research covers a whole bunch of different areas.
We do some quantum mechanics kind of stuff, Schrödinger’s cat, nanoparticles, all of those good things.
Nate: And his lab at the University of Illinois also studies proteins, specifically how they fold up.
So why do you really need to map these things?
Like why would you need to know when the protein was folding?
Martin Gruebele: So we want to understand whether proteins are folded or not because if they’re not folded they can’t do their job. Your cells are filled with proteins and one of the first things that happens in your cells is that these proteins fold up and make very compact structures and then they become enzymes. And these enzymes, you know, they run chemical reactions in your body. They make, for instance, ATP, which is a molecule that allows you to move around and talk or do anything. No ATP, no moving around, no doing anything. And so scientists really want to understand how proteins do this because they do it by themselves.
Nate: In order for the protein to do its job, it first has to fold up into the shape or structure that allows it to do its job.
And in order to do this, we need to talk about hydrogen bonds.
Martin Gruebele: And the reason for this is that the protein needs to detach from the water molecules and make hydrogen bonds with itself instead of the water.
That’s why we’re so interested in sounds that allow you to understand how hydrogen bonds form or bonds to the water form.
Carla Scaletti: So the idea was, I don’t know if you’ve ever played a video game where there’s a simulated world kind of, and when there’s an impact where two things hit each other, you could trigger a sound when that happens because you know in the model world, those two things came close so you go, “click” and it would make a sound.
So we were trying to do the same thing here. Like when the things are close enough that there could be a hydrogen bond, we make a “blip.”
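A rough sketch of that trigger-on-proximity idea, with hypothetical bond types, pitches, and cutoff distance (the real analysis runs over full simulation trajectories):

```python
# Hypothetical bond-to-pitch assignments; the cutoff is a typical
# hydrogen-bond length in angstroms, but all numbers are illustrative.
BOND_PITCH = {'backbone': 440, 'sidechain': 660}
CUTOFF = 3.0  # angstroms

def blips(frames):
    """Scan simulation frames and emit a (time, pitch) event whenever
    two atoms come close enough to form a hydrogen bond."""
    events = []
    for t, (kind, distance) in enumerate(frames):
        if distance < CUTOFF:
            events.append((t, BOND_PITCH[kind]))
    return events

events = blips([('backbone', 4.2), ('backbone', 2.8), ('sidechain', 2.9)])
```

Because each bond type gets its own pitch, several “Geiger counters” can play at once, which is exactly the monitoring Carla describes below.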
Nate: So you gathered all this data on what these proteins and these molecules are doing and then for each way they interacted with each other you assigned a certain pitch or a certain sound.
Martin Gruebele: And then there’s many of these happening at once and just like you can listen to music and hear the symphony orchestra, the violins, the oboes, you can hear all of that.
You can hear all of it at once when you listen to it.
Whereas when you look at it, it just looks like a cacophony of colors or things moving around, it’s very hard to distinguish.
Nate: But when you assign a sound to each time a hydrogen bond forms, then you can clearly find when the protein is unfolded versus folded. And that’s what we’re hearing right now.
Martin Gruebele: You can tell there’s something different going on right there.
Nate: Oh!
Martin Gruebele: And you can tell there’s definitely something very different going on there.
Nate: – Then it changes again.
Martin Gruebele: – And so it goes back and forth between things that sound siren-y and things that sound like birds chirping, and that’s because you form different kinds of combinations of these bonds, or in this case, you’re breaking them as the protein unfolds. And it turns out, if you want to listen to massive amounts of this kind of data for different proteins, because we run the simulation many times and the protein folds dozens of times, we listen to it over and over again. It’s much easier to find patterns like that by listening, going, ah, the bird chirp happened in five of these proteins and the other one happened in seven of those proteins.
Nate: – So when we hear a specific sound, we know something important is happening.
Carla Scaletti: – I think of this almost like a Geiger counter.
Like you can hear the density of how many bonds are forming, but it’s many Geiger counters.
And each different type of bond is at a different frequency.
So you can monitor several of them at once.
And Martin’s really good at just like listening to that and picking out which bonds are forming at what time.
Nate: – And so about a Geiger counter, was that like an early form of what we’re doing now?
Martin Gruebele: – So it’s a device that responds to radioactive decay by making basically a blip tone every time there is a radioactive decay. And you know, it makes that crackling noise: when you hold it closer to the radioactive source, the crackling increases.
And that really is a sonification.
It tells you how many decays per second are happening. And so the intensity where it’s going click, click, click, or clack, clack, clack, you can immediately tell that you have higher radioactivity.
And of course, Carla does a lot of this purely by computer processing. Back in those days, it was all analog equipment, where you had to wire stuff together. So a lot of stuff can get done digitally now on a computer in a very automated way. Like that little program that she showed you at the very beginning, I mean, it’s really different modules like a filter, a sine wave generator, things like this that you stick together to make the sound.
And in the old days, you really would have had like a sine wave box and a filter box and various other boxes that you connect with cables. And so she just connects the cables virtually on the computer.
Nate: – And so this work all revolves around better visualizing, or, well –
Martin Gruebele: – or better hearing.
Nate: – Or better hearing proteins.
And so how can this help us understand what proteins do and sometimes when they go wrong, so to speak?
Martin Gruebele: – So let me give you an example.
So when you make mutations in a protein, you substitute an amino acid by another one, you might get patterns that are not quite right and the protein might malfunction.
So for instance, if this happens to a protein called p53 in your cells, you’ll get cancer. And that would be an unpleasant consequence. And so by really understanding how both the ordered parts of these proteins and the more disordered parts really interact, in part through visualization or sonification, we can actually learn, for instance, how sensitive the proteins are to certain mutations.
Or we can learn things like, well, if there is a mutation here, and we can’t really change that, we’re not going to genetically engineer you afterwards. I suppose someday we could, but we don’t want to, that’s a whole different topic.
Nate: And that’s a story for another time then.
Martin Gruebele: Yeah, exactly.
But we could maybe give you a small drug molecule that actually could bind to the right place on the protein and correct that error from the mutation.
The Internal Brain State Of The Musician
Nate: Data sonification is helping us better understand proteins.
But what’s happening here at Symbolic Sound is helping unearth an even bigger understanding.
Carla Scaletti: So I’ve come to think of music as being a sonification. And then the question is, a sonification of what? And I’ve come to think that it is a sonification of the internal state of the composer’s mind.
Because when you think about consciousness, what is consciousness? I mean, how do you experience it?
Nate: – Well, it is experience.
Carla Scaletti: – It’s like a flow of experience.
So sometimes things are moving fast and energetically, sometimes things are boring and they’re moving slowly, like not very much is happening. Sometimes lots of things are happening. Sometimes things are almost like violently coming toward you and sometimes things are peaceful.
And that’s a description of music.
Sometimes it’s moving fast, sometimes it’s moving more slowly, sometimes it’s very visceral, percussive things almost hitting you. So I think when you listen to music, it is like mind melding with the person making that music.
Nate: – It’s like what Carla said earlier, everything is data, music is data.
Martin Gruebele: – Yeah, exactly, think of it as similar, except that it’s your actual brain state that gets associated with the music. What does a painter do? You’re seeing their brain state in an image, basically, right? Because an abstract painting is certainly not some obvious rendition of something out there.
It’s reflective of the internal state of the person who painted it, just like music is reflective of the internal state, just like these sonifications are reflective of the internal state of the protein. And proteins are small, simple enough things that that is something we can already understand by science, whereas you know, neurobiology is still working hard and trying to understand brain states.
[MUSIC PLAYING]
Nate: Like Carla, Mark Temple’s thinking about music has also evolved.
Mark Temple: It’s taken me down the path of taking this science audio into a space where musicians can interact with the science data.
Nate: And that’s what he did. He started interacting with his DNA sonifications.
Mark Temple: I started playing drums to my science data, which sounds a bit odd,
but the intersection of art and science I thought was really interesting.
Nate: – Then he started bringing in other musicians to perform with him. What you’re hearing right now is a piece based on the DNA sequence of Myrtle Rust.
Mark Temple: – Which is a fungus that’s causing devastation in Australian native plants. This fungus was introduced into Australia ten years ago, and it’s currently spreading up and down the east coast; thousands of kilometers of coastline are now being infected by the Myrtle Rust. So I thought that would be a good sequence to look at. I think sonification in the context of science is really important, using our ears to recognize patterns in DNA, but I’ve also proven to myself that this can sound musical. And once I’m in the musical space, I can apply the other part of my brain, the creative, musical side, to start making music out of these sequences. That’s been really fascinating.
Nate: What’s even more fascinating is in this case Mark is mind-melding with a composer and that composer just happens to be the building blocks of life itself.
[MUSIC PLAYING]
Nate: If you’ve enjoyed the music featured in this episode, you’re in luck.
You can watch Mark performing this song on our website, theshowaboutscience.com, and we’ve also included a link in the description. Trust us, it’s very cool and definitely something worth watching.
But wait, there’s more!
If you’re curious about what your DNA sounds like, Mark has also created an app that allows you to listen to your 23andMe or Ancestry.com DNA data.
We’ve also put that link in the show notes.
A special thank you to Kimberly, Carla, Martin, and Symbolic Sound in Champaign, Illinois. They graciously let us visit and even shared Carla’s musical compositions for this episode.
Additional thanks to Gopika Gopan for letting us tour the Gruebele Lab on a Friday afternoon.
And we can’t forget Sumeet Kulkarni who wrote an excellent article for the LA Times that inspired this episode. We’ll include a link to that article in the show notes as well.
Okay, there you have it folks. The Show About Science is complete. Additional music for this episode comes from EpidemicSound.com and our theme song was written by Jeff, Dan, and Theresa Brooks.
Make sure to subscribe to The Show About Science wherever you get your podcasts. Okay dad, you can shut the recording off.
Sounds like I’m being powered off.

