As you know from the “Story of the Stories” link on the blog, I did not start out to create stories2music. I was merely teaching myself how to do sound editing so that I could teach digital storytelling in my college classes. However, something interesting happened that still intrigues me to this day.
After I narrated my first story in Audacity, I randomly chose four short orchestral film music pieces as background music. When I played the mix back, something remarkable happened: the music fit the story so perfectly that it sounded as if the pieces had been written specifically for it. It was mind-boggling how perfect the match was.
Well, this intrigued me, so I did another story. The same thing happened: the music matched the story perfectly. I did it again and again over several years, and all of the stories have perfectly matched music. For two of the stories, I also experimented with added sound effects.
Throughout this process, I figured out how to license graphics and the music pieces so that I could put them up on a website for family and friends to hear. I created the stories2music website.
One thing puzzled me, though. I clearly heard how precisely the music matched the stories, but other people didn't seem to hear it as clearly as I did. They weren't as shocked and amazed by the match as I was. Perhaps the stories weren't that good, or the production of the audio stories wasn't that good (you do have to listen to them with headphones to hear the incredible nuances of the music pieces), but the question still intrigued me: why did I hear it when other people didn't?
Media Grammar and My Experiment
Last fall, I took a “History of Multimedia” class at Palomar College for professional development. For the final project, we had to design a study about some aspect of multimedia, create survey questions, interview people, and create a PowerPoint presentation or video about our findings. Although this was an undergraduate class, the professor knew I was a college teacher, so she allowed me to do a more complex study.
In this class, I learned about the concept of media grammar, which is “the underlying rules, structures and patterns by which a medium presents itself and is used and understood by the audience” (Pavlik & McIntosh, 2017, p. 44). This put a name on the puzzle of why I could understand the “music grammar” of my stories. I could hear how the music matched the stories in very precise ways. I also realized that the music created a richer, more imaginative and emotional experience. The narrated stories, separate from the music, were not as vivid and moving. There was a connection somewhere.
I started to think about the old radio shows that had actors, music and Foley sound effects, so I began to do some research. There was some research on how film music affected the viewer’s emotions, but there wasn’t much yet about how music and sound effects enhanced audio stories (I did find two Master’s theses on the subject). Two quotations resonated with me:
“Radio and recorded music have their own grammar, one based only on sound . . . which can be used to convey information, capture attention, or evoke a mood or scene” (Pavlik & McIntosh, 2017, p. 45).
“Once music is linked with a visual narrative, it takes on elements beyond that of simply musicality—it takes on a character of its own, becoming almost as another player in the story, one with its own perspective, voice, and interrelations with other characters. Given the positive impact of music on film, one might wonder whether similar results would be found when combining specially-composed music with a fiction text” (Strong, 2013, p. 5).
I decided to explore this idea for my multimedia project. My study would try to answer the following research question: In an audio story, does the inclusion of orchestral film music and sound effects enhance the vividness of the images created in the participants’ imaginations and more effectively maintain their attention?
For my project, I created a sound experiment and a survey that tested the above research question. One of my stories2music—Aurora’s Secret—has music and added sound effects (thunder clap, rain, thud of dirt on casket, closing carriage door, and carriage moving off), so I used that story for the test.
I had my participants listen to three versions of the story:
- Narration only, with no music or sound effects.
- Narration and music, but no sound effects.
- Narration, music, and sound effects.
After listening to each version, the participants answered the survey questions.
Here is the link to the web page with the audio clips: http://www.stories2music.com/gcmw100/gcmw100_survey.html
(You might want to do the test yourself to see how the music and sound effects affect your imagination. I think it will surprise you.)
What I Learned
Even though this was a very small statistical sample size (three people), I learned some interesting things. Stories can be communicated in three ways: reading, hearing and seeing.
Reading a story seems to be the purest way of allowing the imagination to work unhampered. The images each reader forms from the words are unique, based on what that person knows and has previously experienced.
Hearing a story is more natural because our brains seem to be wired for oral storytelling. However, the narrator can influence the images created in the imagination. For example, if the narrator reads a sentence with a certain inflection (sarcasm, for example), then that tone influences how the listener views the character. The narrator is influencing the imagination, to some degree, especially if the listener would not have thought the character was sarcastic when reading the text.
Seeing a story (movies/TV) allows the imagination the least freedom to work. Film images tell the imagination what to see, and film music adds another layer because it tells the viewer how to feel about the scene.
When my participants listened to the story without music or sound effects, they described the scenes very specifically on the questionnaires. It was clear that the words did generate similar images in all three participants’ imaginations. The words did create emotions in some cases.
However, when the music was added, it changed their original imagery. They described the same scenes differently. The emotion was deeper; one participant nearly cried. The music told the participants what to feel. The music changed their imagination and perception of the story.
The sound effects also changed the original imaginative images. One participant said the dirt falling on the casket sounded heavier than she had originally imagined, and she reasoned that the rain had made the dirt muddy and heavier, something she had not pictured originally. Another participant had a stronger sense that the grieving woman was actually leaving in the carriage once he heard that sound effect. That same participant said he had somehow missed the idea of “fierce rain” in the first version; the sound effect of the rain brought it to his attention, and his view of the scene changed.
You can read the whole research documentation and see the presentation on my website: http://www.stories2music.com/KM/new_site/research.html
What began as a simple desire to learn sound editing has grown into 16 stories and a research project. Hopefully, more people will do research on this topic. I know that there is a movement in the audiobook industry to do “full cast” productions with music and Foley sound effects, similar to the old radio shows: https://www.publishersweekly.com/pw/by-topic/industry-news/audio-books/article/69642-audiobook-publishers-go-big-on-full-cast-productions.html
The time has come for these types of projects.
Pavlik, J. V., & McIntosh, S. (2017). Converging media: A new introduction to mass communication. New York, NY: Oxford University Press.
Strong, A. E. (2013). An empirical study on the effects of music and sound effects in fiction e-books (Unpublished master’s thesis). Brigham Young University, Provo, UT. Retrieved from https://scholarsarchive.byu.edu/cgi/viewcontent.cgi?referer=&httpsredir=1&article=4911&context=etd