PaperPlayer

The Neuroscience of Listening to Science


Different modalities hit different in the Sciences

@brekolazh
Jan 10

Happy New Year! We’ve been heads down building PaperPlayer and have a lot to share in the upcoming weeks and months. This is our second blog post about how we’re thinking about scientific audio and more effective ways to consume scientific content. The neuroscience of listening versus reading research is something we think about way too much, so we thought it would make sense to share some of the neuroscience of content consumption and the differences in learning across modalities.

You’re receiving this email because you signed up for updates on our site. You’re in good company with thousands of listeners who have enjoyed nearly 5,000 new audio abstract episodes. We invite you to share your favorite abstracts and new articles with us on Twitter and to reach out with any feedback you might have!


You might be a scientist if you’ve experienced this: rushing to hit time points on your assay to get that last-minute data for lab meeting while being helplessly behind on the literature. The game has changed for those living on the technical edges of discovery. With over 2 million research articles published per year, staying informed about - let alone up to speed with - the literature in your field is daunting.

We recently wrote about how this compelled us to build PaperPlayer, a new way to consume open science for busy scientists, researchers, and technologists. And as neuroscientists, we wanted to dive a bit deeper into the problem space: not just keeping up with the content, but actually learning it. The neurosciencey term is encoding, and there’s a deep body of work in the field of learning and memory that we’ve only begun to consider when consuming information. Let’s dive in.

Learning defined by neuroscientists

We all know what learning feels like, but defining a neural basis for learning is a bit more of a stretch. Our brains are made up of billions of cells that retain information by connecting with other cells. The key functional cells are called neurons, and as we’ve discovered more about how they work, scientists have been able to grow our understanding in the fields of learning and memory. When our senses (taste, sight, hearing, etc.) take in information from the world around us, that information is converted into electrochemical signals that neurons can work with. This next part gets a little jargony, so feel free to skip ahead if that’s not your thing.

This process of encoding is made possible by the plasticity of neurons, dendrites, and synapses. Our learning and memory systems are built to retain this neural data, and they evolved dynamically around the kinetics of each sensory modality. In fancy neuroscience terms, repeated electrochemical signaling strengthens the connections between neurons through long-term potentiation (LTP), the cellular mechanism that stores enduring memories. Some brains learn visually: reading a concept, converting it to short-term memories in the medial temporal lobe (starting with the hippocampus), and then storing those concepts in the neocortex (mainly the temporal lobe). Other brains learn optimally by listening to a concept, decoding it using Wernicke’s area, converting it to short-term memories (again in the hippocampus), and then storing that information in the neocortex. The more frequently we receive signals in this process, the stronger the connections between the neurons become and the higher the quality of the learned memory.
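
If you’d like to make that intuition concrete, here’s a minimal toy sketch in Python (our illustration, not a biophysical model) of the Hebbian idea behind LTP: every repeated pairing of pre- and postsynaptic activity strengthens a synaptic weight, while disuse slowly decays it.

```python
# Toy Hebbian model of long-term potentiation (LTP).
# Purely illustrative: a single scalar "synaptic weight" is potentiated
# each time pre- and postsynaptic neurons fire together, and decays
# slightly between exposures (passive forgetting).

def expose(weight: float, repetitions: int,
           learning_rate: float = 0.05, decay: float = 0.01) -> float:
    """Repeatedly pair pre/post activity (think: revisiting a concept)."""
    for _ in range(repetitions):
        pre = post = 1.0                       # both neurons active together
        weight += learning_rate * pre * post   # Hebbian potentiation
        weight *= 1.0 - decay                  # gradual decay between pairings
    return weight

for reps in (1, 5, 20):
    print(f"{reps:>2} exposures -> synaptic weight {expose(0.1, reps):.3f}")
```

Twenty exposures leave a much stronger “synapse” than one, which is the toy version of why repetition, in any modality, improves recall.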

Over our lives we all develop a preferred modality of learning, and we are capable of reinforcing that learning and memory with additional signal inputs. Our brain’s complex dynamical systems can also evolve complementary systems of reinforcement, so listening while reading creates stronger, more distinct LTP; reading while visualizing enhances LTP further, and so on.

Modalities of learning

A bit more on the modalities. First, there are so many cliches about our brains floating around on the internet these days (FiVe eAsyy StePs tO trAiN your BrAin, dopamine this and serotonin that), but one thing we can all agree on is how complex human brains are. Our brains are thirsty for energy and easily overwhelmed with signal (👋 #ScienceTwitter). We’re only beginning to fully appreciate how the mechanisms of learning work, and those mechanisms vary from person to person. Some of us learn best by reading, others by doing, others by listening. As Dr. Vega Shah recently highlighted, the brain is activated differently by different learning modalities.

Used with permission from Dr. Vega Shah

She also touched on one of our favorite features of audio, multitasking, which differentiates it from text; we’ll revisit that later.

With advances in technology, we are entering a world where high fidelity learning and retention can be reinforced via these different but overlapping modalities. One modality most of us as scientists have paid good money for is listening to lectures, seminars and colloquia on scientific topics. 

What if there was a seamless and enjoyable way to have all the emerging content in your field available for listening while you changed media on your cells, waited for your PCR run to complete, or while out walking during lunch or commuting to the lab?

Signal to noise

So, learning takes lots of energy and time, but what happens when we barrage our noggins with too much information? Our brains function a bit like the memory on our devices: when data gets fragmented, performance drops and the information stops coming back in a useful format. The world we’ve recently built around capturing and distracting our attention with 90-second videos and 280-character tweets produces noise that can majorly interfere with our ability to learn new information. We continue to experience this with the rise of preprints and a staggering increase in new publications. Building LTP and encoding that information becomes harder and harder without better sorting and new high-fidelity modalities of content engagement.

This doesn’t even begin to address tweet threads, YouTube videos, and podcasts. But doesn’t the brain prefer one input over another? It turns out that when it comes to interpreting semantic information, functional imaging studies suggest that - all distractions aside - there is little difference between reading and listening.

Activation when reading vs. listening (Deniz et al., 2019)

The colorful brains above show brain activity when reading and listening. The fiery-colored brain maps the correlation between the two, with yellower areas in the prefrontal and temporal cortices indicating high overlap across modalities. In other words, reading and listening are not so different when it comes to encoding new memories.
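
To get a feel for what “high overlap” means quantitatively, here’s a minimal sketch (with made-up random data, not the actual Deniz et al. results) of the kind of voxelwise correlation a map like this summarizes:

```python
import numpy as np

# Hypothetical illustration: correlate each voxel's semantic tuning
# profile estimated from reading data with the profile estimated from
# listening data. The arrays below are random stand-ins with a shared
# component, not real fMRI model weights.

rng = np.random.default_rng(42)
n_voxels, n_features = 1000, 50

shared = rng.standard_normal((n_voxels, n_features))   # shared semantic tuning
reading = shared + 0.3 * rng.standard_normal((n_voxels, n_features))
listening = shared + 0.3 * rng.standard_normal((n_voxels, n_features))

# Pearson correlation per voxel between the two modalities' profiles;
# high values correspond to the yellow (overlapping) regions in the map.
r = np.array([np.corrcoef(reading[v], listening[v])[0, 1]
              for v in range(n_voxels)])

print(f"median voxelwise reading-listening correlation: {np.median(r):.2f}")
```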

Fun fact: the Gallant lab hosts a much more detailed interactive viewer where you can play with these data.

Let’s get a little Sci-Fi

The ability to take in, match, and amplify these modalities while in the act of learning gets a little sci-fi. Metacognition refers to the learner actively thinking about the act of learning; here’s a great review in case it’s of interest.

MetaCog Meme from Minority Report

So when you’re multitasking (pipetting, writing code, commuting, etc.) and you have noisy inputs, it’s hard to think deeply about the content you’re processing. The brain, unless you train it otherwise, is not great at deeply learning from multiple simultaneous inputs. This changes (again, shoutout to Dr. Shah) when the input is more passive and accessible, as with audio. We think the ability to work, listen, and think while engaging with technical research papers is the future of scientific publishing and content.

How might this look? Here’s a sneak peek at the product we’re building that will bring a multimodal, full-text audio experience using AI. If this is of interest, we would love for you to hop on our early access list and share your feedback on the beta.

A first look at the PaperPlayer Product

The accessibility of audio

It’s not just scientists and researchers who are perpetually barraged by noisy videos, emojis, and memes. We all have varying abilities and gifts when it comes to consuming information and learning. What happens when one modality is unavailable? Another game changer PaperPlayer brings to the table is increased accessibility for neurodiverse users to consume science. Audio helps across a spectrum of access use cases, from dyslexia to blindness, Parkinson’s disease to ADHD and beyond. We’re just starting to explore this space, and we’re very compelled by the capability of technology to increase access and enhance learning and memory for all.

Leaning into new ways to learn

We’re thrilled that the game is changing and that new AI-enabled modalities for engaging with scientific content are becoming a reality. We also know our work wouldn’t be possible without all of the amazing, dedicated open access advocates in the space, like bioRxiv and eLife, which are advancing many fields with their open access innovations. We look forward to continuing to collaborate with these pioneers and will be dedicating some upcoming thoughts to the future of open publishing and new product experiences for busy researchers.

If the future of scientific content excites you, give us a follow and share PaperPlayer with your research group! We look forward to hosting your next preprint in audio form and expanding accessibility in science.

Christian and Taylor

PaperPlayer
