Journal of Sonic Studies, volume 6, nr. 1 (January 2014)
Iain McGregor; Phil Turner; David Benyon: USING PARTICIPATORY VISUALISATION OF SOUNDSCAPES TO COMPARE DESIGNERS’ AND LISTENERS’ EXPERIENCES OF SOUND DESIGNS
The same sound design can be experienced differently by listeners depending on their personal interests and training. Visualising soundscapes provides insight into listeners’ experiences so that they can be easily compared. To be useful to designers, soundscape visualisations need to be applicable to a wide range of soundscapes, such as auditory displays, games, and films. Accordingly, ten sound designers working in different media were asked to design a soundscape that they would be interested in having visualised (see Table 1). Listeners hear a sound design and classify all of the sound events they are aware of using the attributes shown in Table 2; the results are then collated and visualised to illustrate both the designer’s and the listeners’ experiences.
2.1 Participants
Ten professional sound designers were recruited via email. They worked in a variety of fields, from interface design through to games, film, television, and radio. The 100 listeners were staff or students at Edinburgh Napier University, none of whom had previously taken part in a listening study. All participants considered themselves to be without hearing difficulties and ranged in age from their early twenties to their late fifties. Both male and female participants took part, with a ratio of approximately 3:2.
The ten sound designers were asked to supply a sound design that they would like to have visualised. The choice of design was left to the sound designer, and no guidance was given about length or complexity. The tests were conducted in a quiet office with stereo loudspeaker reproduction, except for design 9, which required a surround sound system located in an isolated, acoustically untreated room.
For six of the designs, participants were asked first to listen to the complete design and then classify the sound events. For the other four sound designs, participants were played short sections and asked to rate specified sound events based upon what they had just heard. The decision as to which approach was adopted was left to the designers. Questioning about the attributes of each sound event was conducted verbally, with listeners having access to the grid (for identifying spatial attributes) and the list of attributes (see Table 2). The classification itself was based on the principle of a common language, having been derived from a lexicon generated from descriptions used by participants to describe what they were listening to (McGregor, Leplatre, Crerar and Benyon 2006) and a questionnaire where audio professionals were asked for terms that they used to describe sounds (McGregor, Crerar, Benyon and Leplatre 2007). This meant that the resultant terms should be meaningful to both groups.
The procedure involved classification, visualisation, and a survey. The designers rated their designs based on the specified attributes and forwarded them for visualisation. Listeners then classified the designs using the same attributes, and the results were visualised.
Listeners were randomly assigned to sound designs until ten participants had experienced each design. All of the participants were able to complete all of the tasks without prompting. The listeners’ responses were collated, and the mode of each attribute was calculated for each sound event. The results were translated into two visualisations: the first represented the designer’s intentions, the second the combined listeners’ experiences. For this iteration the visualisations were generated manually; however, an automated version has been proposed, so that listeners and designers could create their own visualisations in the future. The results for all ten designs are shown in the following section, along with brief discussions.
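The collation step described above — reducing ten listeners’ categorical ratings to the modal value of each attribute per sound event — can be sketched as follows. This is an illustrative reconstruction, not the study’s actual tooling; the event and attribute names are hypothetical stand-ins for the attributes listed in Table 2.

```python
from statistics import multimode

# Hypothetical listener classifications: each listener rates each sound
# event on a set of categorical attributes (names are illustrative only).
responses = {
    "door_slam": [
        {"source": "mechanical", "location": "left"},
        {"source": "mechanical", "location": "centre"},
        {"source": "mechanical", "location": "left"},
    ],
}

def modal_classification(ratings):
    """Collapse listeners' ratings for one sound event to the modal
    value of each attribute, mirroring the collation step in the text."""
    attributes = ratings[0].keys()
    return {
        attr: multimode(r[attr] for r in ratings)[0]  # first mode wins ties
        for attr in attributes
    }

collated = {event: modal_classification(r) for event, r in responses.items()}
print(collated)
# {'door_slam': {'source': 'mechanical', 'location': 'left'}}
```

The collated dictionary would then drive the listeners’ visualisation, to be set alongside the one built from the designer’s own ratings.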