The Journal of Sonic Studies

To refer to this article use this url: http://journal.sonicstudies.org/vol06/nr01/a08

1.1 Listening, soundscapes, and sound design

Listening and hearing are different (Handel 1989), and Szendy (2008) tells us that we can choose to listen. Madell and Flexer (2008) define hearing as the acoustic mechanism by which sound is transmitted to the brain, whereas listening is the process of focusing on and attending to what can be heard. Thus, listening is an active process comprising conscious choice and subjective interpretation of what is heard (Blesser and Salter 2007).

A soundscape can be defined as the surrounding auditory environment that a listener inhabits (Porteous and Mastin 1985; Rodaway 1994; Schafer 1977). The soundscape surrounds the listener and is an anthropocentric experience (Ohlson 1976). The definition has not been standardized, but on-going work towards an ISO standard aims to establish its definition and conceptual framework, as well as methods and measurements for its study (Brown, Kang and Gjestland 2011; Davies et al. 2013). There is no complete model of the soundscape, as interpretation is shaped by the sounds that can be heard, by the acoustic space that modifies those sounds, and by listeners' own interpretations, which depend on what they attend to and how (Davies 2013).

Luigi Russolo, as part of his 1913 Futurist manifesto, encouraged musicians to analyse noise in order to expand their sensibilities (Russolo, Filliou, Pratella and Press 1967). Granö differentiated between the study of “sound” and “noise” in 1929. He mapped auditory phenomena with reference to the “field of hearing” rather than “things that exist”. Granö did not use the term soundscape; instead he applied the concept of proximity, which represented the area immediately surrounding an inhabitant (Granö 1997). The concept was revisited in 1969 when Southworth tried to establish how people perceived the sounds of Boston and how this might affect the way they experienced the city (Southworth 1969). Schafer (1977) and Truax (2001) attempted to formalise the concept using descriptions derived from existing terms, such as soundmarks by analogy with landmarks. Schafer (1993) argued that all soundscapes should be designed or regulated to display what he terms high-fidelity (distinct, easily interpreted sounds), rather than low-fidelity (indistinct, difficult to interpret sounds). Soundscapes and the individual sounds that make up a soundscape have been shown to have a physiological and psychological impact upon listeners (Cain, Jennings and Poxon 2013). Sounds that are considered unpleasant cause a reduction in heart rate, and pleasant sounds lead to an increase in respiratory rates (Hume and Ahtamad 2013).

The work of the sound designer is to create an aesthetic combination of sound events that produces a soundscape which is informative and/or evokes an emotional response in the listener. For example, in film and other linear media, sound may be used as sleight-of-hand, making the audience believe that something has happened (Chion 1994). Video game sound designers have adopted many of the techniques associated with film sound (R. Newman 2009), but have added interactivity so that some of the sound events are directly controlled by gamers’ actions, whilst other sounds remain passively experienced within non-interactive sequences (Collins 2008).

Sound designers routinely manipulate the attributes of sound as part of their everyday practice. These include the sound’s pitch, loudness, timbre (or overall quality of the sound), duration, and direction. For example, the length of a sound can be used to convey a character’s emotions, such as a longer doorbell ring suggesting impatience (Kaye and Lebrecht 2000). The length of a silence (or lack of sound) can be useful to convey the passage of time or a change of location (Beaman 2006). Changing a sound’s pitch can make objects seem larger or smaller or alter the age or gender of a character (Beauchamp 2005; Collins 2008). Spatial cues, such as panning, can provide an insight about what a character is attending to (Beck and Grajeda 2008; Kerins 2010).
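The manipulations described above can be sketched in a few lines of signal-processing code. The following is a minimal illustration, not any particular designer's tool chain: it assumes NumPy is available, and the function names (`tone`, `pitch_shift`, `pan`) are illustrative. Lowering pitch by resampling (which also lengthens the sound) and constant-power panning are standard, if crude, versions of the techniques cited.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def tone(freq, dur, sr=SR):
    """Generate a sine tone of the given frequency (Hz) and duration (s)."""
    t = np.arange(int(sr * dur)) / sr
    return np.sin(2 * np.pi * freq * t)

def pitch_shift(sound, factor):
    """Crude pitch shift by resampling: factor > 1 raises the pitch
    (and shortens the sound); factor < 1 lowers it (and lengthens it)."""
    idx = np.arange(0, len(sound), factor)
    return np.interp(idx, np.arange(len(sound)), sound)

def pan(sound, position):
    """Constant-power panning: position runs from -1.0 (hard left)
    to +1.0 (hard right). Returns an (n, 2) stereo array."""
    angle = (position + 1) * np.pi / 4  # maps [-1, 1] to [0, pi/2]
    left = np.cos(angle) * sound
    right = np.sin(angle) * sound
    return np.stack([left, right], axis=1)

# A 440 Hz "doorbell" tone: dropping it an octave makes the imagined
# source seem larger, and panning it right cues its location in space.
bell = tone(440, 0.5)
big_bell = pitch_shift(bell, 0.5)  # one octave down, twice as long
stereo = pan(big_bell, 0.8)        # mostly in the right channel
```

Duration is manipulated implicitly here (resampling lengthens the tone); a real production tool would use a time-stretching algorithm to change pitch and duration independently.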

In interaction design, designers of auditory displays are concerned both with sounds being considered informative as well as creating appropriate acoustical properties (Brewster 2008; Buxton 1989). For example, Gaver’s SonicFinder (Gaver 1989) used auditory icons such as a scraping sound for objects being dragged across a computer desktop and a scrunching sound for putting a file in the wastebasket; similar sounds are used on Apple’s operating system to this day. Microsoft’s Outlook email client, in contrast, uses abstract earcons, such as a soft tinkling when an email arrives in the user’s in-box.
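The distinction between the two display types can be made concrete with a small synthesis sketch. The example below, which assumes NumPy and uses invented names (`note`, `earcon`), builds a hypothetical "new mail" earcon as a short rising triad: unlike an auditory icon, the sound bears no physical resemblance to its referent, so the mapping must be learned by the user.

```python
import numpy as np

SR = 22050  # sample rate in Hz

def note(freq, dur, sr=SR):
    """A sine note with a linear fade-out to avoid clicks at the end."""
    t = np.arange(int(sr * dur)) / sr
    env = np.linspace(1.0, 0.0, t.size)
    return np.sin(2 * np.pi * freq * t) * env

def earcon(freqs, note_dur=0.12):
    """Concatenate pure tones into a short abstract motif."""
    return np.concatenate([note(f, note_dur) for f in freqs])

# A hypothetical "new mail" earcon: a rising C-major triad (C5, E5, G5).
# The abstract motif carries meaning only by convention, whereas an
# auditory icon (a scrape, a scrunch) exploits everyday listening.
new_mail = earcon([523.25, 659.25, 783.99])
```

An auditory icon, by contrast, would typically be a recorded or physically modelled everyday sound rather than a synthesized motif, trading learnability for a larger sound-design effort.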