Scientists are turning data into sound to hear the whispers of the universe (and more)

We often think of astronomy as a visual science with beautiful images of the universe. However, astronomers use a wide range of analysis tools beyond images to understand nature more deeply.

Data sonification is the process of converting data into sound. It has powerful applications in research, education and outreach, and enables blind and visually impaired communities to understand plots, images, and other data.
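At its simplest, sonification maps data values onto audible parameters such as pitch. The sketch below is a minimal, hypothetical illustration of that idea (the function name, data, and frequency range are all invented for this example): it rescales a data series into a range of frequencies that could then be played as tones.

```python
def sonify(values, f_min=220.0, f_max=880.0):
    """Map each data value to a pitch between f_min and f_max (Hz)."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for constant data
    return [f_min + (v - lo) / span * (f_max - f_min) for v in values]

# A brightness measurement over time becomes a sequence of pitches:
brightness = [0.1, 0.4, 0.2, 0.9]
pitches = sonify(brightness)
```

Rising data becomes rising pitch, so a trend that might be missed in a table of numbers is heard directly.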

Its use as a tool in science is still in its early stages – but astronomy groups are moving forward.

In a paper published in Nature Astronomy, my colleagues and I discuss the current state of data sonification in astronomy and other fields, provide an overview of 100 sound-based projects, and explore its future directions.

The cocktail party effect

Imagine this scene: you're at a crowded, noisy party. You don't know anyone, and everyone is speaking a language you can't understand. Then you catch snippets of conversation in your own language from a far corner. You focus on them and make your way over to introduce yourself.

While you may never have experienced a party quite like this, the idea of hearing a recognizable voice or language in a noisy room is familiar. The ability of the human ear and brain to filter out unwanted sounds and retrieve the ones we want is called the "cocktail party effect".

Similarly, science is always pushing the limits of what can be detected, which often requires extracting very weak signals from noisy data. In astronomy we constantly push to find the faintest, most distant or most fleeting signals. Data sonification helps us push these boundaries further.

The video below gives an example of how sonification can help researchers understand weak signals in data. It features the sonification of nine bursts from a repeating fast radio burst called FRB121102.

Fast radio bursts are millisecond-long flashes of radio emission that can be detected from halfway across the universe. We don't yet know what causes them. Detecting them at other wavelengths is key to understanding their nature.

An abundance of good things

When we explore the universe with telescopes, we find that it is full of cataclysmic explosions, including the supernova deaths of stars, the mergers of black holes and neutron stars that produce gravitational waves, and fast radio bursts.

These phenomena allow us to understand extreme physics at the highest known energies and densities. They help us measure the expansion rate of the universe and how much matter is in it, and to determine where and how the elements were created, among other things.

Upcoming facilities such as the Rubin Observatory and the Square Kilometre Array will detect millions of these events every night. We use computers and artificial intelligence to handle such large numbers of detections.

However, most of these events are faint bursts, and computers are only so good at finding them. If a computer is given a template of the "desired" signal, it can detect a faint burst. But if a signal diverges from this expected behavior, it is missed.

And it is often these divergent events that are the most interesting and give the greatest insight into the nature of the universe. Using data sonification to verify candidate signals and identify outliers can be powerful here.
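The template approach described above can be sketched as a simple matched filter. The example below is a toy illustration (the burst shape, the sinusoidal stand-in for noise, and all numbers are invented): the template is slid across the signal, and the offset with the highest correlation marks the burst. A burst shaped differently from the template would score much lower, which is exactly how such signals get missed.

```python
import math

def matched_filter(signal, template):
    """Correlate the template with the signal at every offset."""
    m = len(template)
    return [sum(signal[i + j] * template[j] for j in range(m))
            for i in range(len(signal) - m + 1)]

template = [0.5, 1.0, 0.5]                                   # expected burst shape
background = [0.05 * math.sin(0.7 * i) for i in range(50)]   # stand-in for noise
signal = list(background)
for j, t in enumerate(template):                             # bury a weak burst at offset 20
    signal[20 + j] += 0.5 * t

scores = matched_filter(signal, template)
best = scores.index(max(scores))   # offset where the template matches best
```

Here `best` recovers the burst's position because the burst matches the template; real pipelines face far noisier data and far less cooperative signals.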

More than meets the eye

Data sonification is useful for scientific analysis because humans interpret audio information faster than visual information. In addition, the ear can discern more levels of pitch than the eye can discern levels of color, and over a wider range.

Another direction we are exploring for data sonification is multidimensional data analysis, which involves understanding the relationships between the many different features or properties of the data through sound.

Plotting data in ten or more dimensions simultaneously is very complex and very confusing to interpret. However, the same data can be more easily understood through sonification.

As it turns out, the human ear can quickly tell the difference between the sound of a trumpet and a flute, even if they play the same note (frequency) at the same loudness and duration.

Why? Because each sound contains higher-order harmonics that determine the quality, or timbre, of the sound. The differing strengths of these higher-order harmonics enable the listener to quickly identify the instrument.

Now imagine mapping the information – the different properties of the data – to the strengths of different higher-order harmonics. Each object studied would have a unique tone, or belong to a class of tones, based on its overall properties.

With a little training, a person can almost immediately hear and recognize all the properties of an object, or its classification, from a single tone.
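The mapping described above can be sketched in a few lines. This is a hypothetical illustration, not any group's actual tool: each property of an object sets the strength of one overtone on top of a shared fundamental, so every object keeps the same pitch but gets its own timbre.

```python
import math

def tone(properties, fundamental=440.0, sample_rate=8000, duration=0.05):
    """One waveform per object: harmonic (k + 2) carries property k's strength."""
    n = int(sample_rate * duration)
    samples = []
    for i in range(n):
        t = i / sample_rate
        s = math.sin(2 * math.pi * fundamental * t)  # same pitch for every object
        for k, strength in enumerate(properties):    # overtones encode the data
            s += strength * math.sin(2 * math.pi * (k + 2) * fundamental * t)
        samples.append(s)
    return samples

# Two hypothetical objects: same pitch, same length, different "timbres"
object_a = tone([0.8, 0.1, 0.3])
object_b = tone([0.1, 0.7, 0.2])
```

Both waveforms share one fundamental frequency, yet a listener would hear them as distinct "instruments" – which is what lets a trained ear classify objects from a single tone.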

Beyond research

Sonification also has great use in education (SonoKids) and outreach (for example, System Sounds and Strauss), and has wide applications in fields including medicine, finance and more.

But perhaps its greatest strength is enabling the blind and visually impaired communities to understand images and plots to help them in everyday life.

It can also enable meaningful scientific research, and do so quantitatively, as sonification research tools can report numerical values on demand.

This ability can help promote STEM careers among people who are blind and visually impaired. And in doing so, we can tap into a vast pool of talented scientists and critical thinkers who otherwise would not have envisioned a path toward science.

We now need government and industry support to further develop sonification tools, improve accessibility and usability, and help establish sonification standards.

With the increasing number of tools available, and the growing need in research and the community, the future of data sonification looks bright!
