Imagine. You’re inside your car, the radio playing some random music. Suddenly you hear an unfamiliar song that you find interesting, and you want to download it later onto your mp3 player. Problem is, the DJ didn’t mention the title of the song at the beginning. Now what will you do?
For someone like me, the move is to pick out a few lines from the song and Google them afterwards. This works when the lyrics are clearly enunciated. But this strategy fails when the singer has an unfamiliar accent, mumbles the lyrics, or sings from a skyscraper with a volume barely above a whisper (Google: Skyscraper lyrics). This makes me wish the radio had a replay button, then hopelessly cry out, “ANSABE?”
Aside from voice quality, other characteristics influence effective and accurate perception of speech or musical lyrics. One factor, which radio lacks, is the corresponding visual stimulation. Facial expressions, lip movements, and hand gestures do not only enhance music appreciation; they also improve understanding of the lyrics. In particular, lip reading has been found to significantly affect perception of musical lyrics.
A study conducted by Miguel Hidalgo-Barnes and Dominic W. Massaro (2007) looked into the effect of seeing a corresponding face on understanding sung words. For this study, they used phrases from the song “The Pressman” by the band Primus. This particular song was chosen to prevent familiarity with the song from affecting the participants’ performance. Using a speech alignment program, the researchers transformed the text and wave file into a computer-animated face, which they presented to the participants. Each participant was subjected to three presentation conditions. The first involved purely auditory stimuli: the participant only heard the lyrics being sung. The second condition involved purely visual stimuli: participants watched the previously aligned animated face mouthing the lyrics. In the last condition, the stimuli were presented in both modes, auditory and visual. The participants’ task was to write down the lyrics they were able to understand, and their performance was assessed by the proportion of accurately identified words.
Results showed that word comprehension was significantly improved by bimodal presentation. The proportion of understood words was 28% in the auditory-only condition and 4% in the visual-only condition. On the other hand, 33% of the words were understood when the animated face was seen and the sound was heard at the same time. Indeed, visual information, particularly the singer’s face, improves perception of musical lyrics.
How then can we utilize these findings? For one, the music industry can take advantage of this information. To make sure the market can understand and perceive their songs’ lyrics, artists may find live performances an effective way of expanding their fan base, and therefore reaping all the big bucks. *Ka-ching ka-ching* Those who value their art more than its monetary equivalent may find this knowledge a way to better share their craft with the people who truly appreciate it, the ones who just find themselves crying while watching live performances of their favorite artists on YouTube. The visual information provided by the singer’s face does not only improve understanding of the words. Visual input also helps express the emotions of the song, enabling the audience to relate better, as if those “hit-home lines” weren’t enough.
With this, I leave you a clip of one of those artists who automatically sets off my lacrimal glands. You will never utter “Ansabe?” with Adele playing on loop.
Hidalgo-Barnes, M., & Massaro, D. W. (2007). Read my lips: An animated face helps communicate musical lyrics. Psychomusicology, 19(2), 3–12.