Stuart Eve defines Augmented Reality (AR) as when the user works in the "real world while visually receiving additional computer generated or modelled information."1 Following Eve, my project is a kind of Mixed Reality (MR) because listeners interact with visuals and sounds on their personal computers rather than moving through the industrial sites themselves. Grimshaw and Garner argue that we cognitively offload sound onto virtual environments: "[t]hat is, I cognitively place sounds in the game world in which I am present despite knowing that the actual location of the sound wave sources is at the headphones or loudspeakers."2 AR is an affective virtual tool because the immersive experience produces meaningful connections with the environment. Music offers an important parallel: Shuhei Hosokawa notes the impact of the Walkman as a kind of AR. By choosing what one listens to, the Walkman cuts through the uncertainty of urban sounds and affectively changes one's environment.3 Bull similarly notes that listening to music on an iPod is an assertion of urban agency, since listeners fashion an aesthetic experience of their environment.4 Hosokawa and Bull illustrate Grimshaw and Garner's point that hearing sound through media devices is more than a recognition of noise from one's headphones. Rather, listeners focus on the meaningful sensations produced by their interactions. We construct a narrative from our experience, and thus both AR and MR have the potential to influence a listener's historical imagination.
Websites are generally silent spaces. We do not expect most websites to have sound unless we anticipate a video or visit a music streaming service. We expect silence from static photographs and view historical websites as sources of information, not digital performances that we hear. Throughout my website, users trigger unexpected sounds by hovering over HTML elements like images and buttons. Usually sound on a web page is marked by play/pause controls. On my website, it is overlaid and specifically located: hovering the cursor plays and pauses it. The affective power of sensory experiences in shaping memory has come a long way from Proust. Current research in neuropsychology has carried Proust's ideas from literary studies into scientific research. In 2010, Daniela Schiller et al. published a seminal paper on cognitive behavioural methods to decrease the impact of traumatic memories. They hypothesized that because the brain consolidates memory by binding information to emotional responses, every time we recall a memory, it is rewritten.5 Schiller found that she could use memory reconsolidation to break the association between a memory and a particular emotion. When she reconstructed the situation of a memory but removed the triggering event that subjects associated with fear, they began to rewrite the memory as more neutral or even positive.6 This suggests that the sudden use of sound in my MR can shape both the memories of Pembroke's workers and the historical imagination of residents who know the past only through stories and local histories. Unexpected affect can have a powerful impact on the historical imagination continually recreated by my website.
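The hover-to-play interaction described above can be sketched in a few lines of JavaScript. This is a minimal illustration, not the site's actual code: the element id, audio file path, and helper function are all hypothetical.

```javascript
// Pure helper: map a mouse event type to the desired playback state.
// (Illustrative sketch; names are hypothetical, not from the actual site.)
function playbackStateFor(eventType, currentState) {
  if (eventType === 'mouseenter') return 'playing';
  if (eventType === 'mouseleave') return 'paused';
  return currentState; // other events leave playback unchanged
}

// In the browser, the helper drives an HTMLAudioElement. The guard lets
// the sketch load outside a browser as well.
if (typeof document !== 'undefined') {
  const image = document.querySelector('#sawmill-photo'); // hypothetical id
  const sound = new Audio('sounds/sawmill.mp3');          // hypothetical path
  let state = 'paused';
  for (const type of ['mouseenter', 'mouseleave']) {
    image.addEventListener(type, (event) => {
      state = playbackStateFor(event.type, state);
      if (state === 'playing') sound.play();
      else sound.pause();
    });
  }
}
```

Because the sound starts and stops with the cursor rather than with a visible control, the listener encounters it as part of the image itself, which is the effect the paragraph above describes.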
Perhaps the best way to understand the digital experience of sound in MR is through two ideas: synchresis and transduction.
Michel Chion says of synchresis that "for a single body and a single face on the screen, thanks to synchresis, there are dozens of allowable voices - just as, for a shot of a hammer, any one of a hundred sounds will do."7 It follows that synchresis only works through assumptions about phenomena. Using the hammer example, one has to know what a hammer sounds like for a post-production hammer sound to work under the guise of synchresis in that scene. Synchresis is thus useful in representing tasks. I recently re-watched the film Saving Private Ryan. Two scenes in particular make the use of synchresis powerfully evident. As Tom Hanks' squad moves into a village, the viewer is confronted with a visual of rain hitting wide, thick leaves. It is not immediately obvious whether we are hearing rain or distant machine gun fire. The sounds blend together as the scene gears up and the squad enters the village. In another scene, the squad moves out of the village to continue their mission. The scene is mostly dark as flashes in the distance light up the soldiers crossing a ridge in the centre of the screen. It is again not immediately obvious whether the booming sound and distant flashes are thunder and lightning or artillery exploding. The producers deliberately confuse the visual and audio elements of synchresis to force viewers to become aware of sound and question its place on screen.
Synchresis is important to this project because it allows me to connect industrial sounds to the narrative of Pembroke's industrial past. But synchresis does not translate seamlessly from film to every other platform. The workers in Pembroke know the machines they used and the sounds those machines produced. Even to an uninformed user, stock sounds match the visual but are obviously not produced directly by the action on screen. Perhaps film is the only platform for strong synchresis because of the mechanical reproducibility we associate with the modern video camera, a tool that captures a moment of reality. In Saving Private Ryan, the powerful opening scene shows the relationship between synchresis and affect. The background explosions, whizzing gunfire, and yelling were mostly added in post-production. Yet the interaction between sound and visuality is so seamless that the sound appears to be produced by the action on screen.
With my project I have established only a weak synchresis by adding audio to the visual elements of my website. When a sound is triggered, users know that it comes from their speakers or headphones. But that sound is connected to the historical context on screen. Synchresis in my project acts much like those two scenes from Saving Private Ryan where its use is obvious. Phenomenologically speaking, weak synchresis is a more obviously cognitive experience, but it can still shake one's expectations of how sound should behave. For instance, Veitch's soundbashing, which focuses on acoustics over sound content, can disrupt the flow of sound in space, helping users understand how we experience sound. Following Eve, the sensations generated by weak synchresis can also produce affective experiences. The strength of synchresis, then, matters less than its affective potential.
Stefan Helmreich argues that because we hear sound through digital platforms, the "soundscapes of modernity" present sound to us at an "aesthetic and conceptual remove."8 Helmreich introduced the term transduction, "the transmutation and conversion of signals across media that, when accomplished seamlessly, can produce a sense of effortless presence,"9 to sound studies. Transduction is a difficult concept to pin down, however, because of its malleability. Helmreich developed the definition during a deep-sea dive. He was in a submarine far beneath the ocean surface when he realized that the digital feedback from machines and the voices of his colleagues around him were all transduced: because of the immense pressure of that environment, the unaided human ear cannot hear sound at such depths.10 The divers were not actually 'immersed' in the underwater environment. The pressurized submarine provided a deceptive environment that allowed them to hear as they would on land. Helmreich therefore presents transduction as an argument against our tendency to define digital experiences as immersive, because we are actually being deceived.11
For the historian Charles Hirschkind, the soundscape of Egypt's religious resurgence of the 1980s existed mostly through the recorded voice.12 Men listened to Islamic preachers on cassette tapes in taxis, on portable audio players, while walking, or while sitting. Their listening was tied to the world around them in an era of growing religiosity in markets, near minarets, and elsewhere.13 Hearing religious texts was integral to becoming more religious and in touch with God.14 The recorded voice acted as an interplay between listener and religious authority. For Hirschkind, the ability of a religious leader's voice to be recorded onto tape and so influence religious experience is transduction. But religious sermons on tape were a specific type of sound experience designed to be listened to anywhere.15 Anyone could access my website on a smartphone and move through Pembroke; that does not affect the transduction. But the site was designed to be accessed in a relatively quiet environment like one's home (moreover, many of the industrial sites are physically inaccessible). My digital landscape thus confronts specific contextual issues experienced only through an MR.
Like synchresis, we are more aware of transduction in recordings and digital landscapes (even when we think they are authentic representations of an environment) than in physical space. I follow Helmreich when he says that "[t]ransduction may not work everywhere."16 My website seems caught between transduction and synchresis. In many ways the two are similar phenomena, because in both the link between sound and environment is contrived. But transduction is more obvious because we are immediately present in a 'deceptive' environment like my website. If we are unaware of our environment, transduction can appear to us as immersion. While transduction may seem a critique of phenomenology, since it makes us aware of the methods that sonify our immediate environment, it in fact helps us recognize how sound is constructed in space: digitally constructed sounds are equally powerful in their own way. You are obviously aware that my website is a transduction of the physical Pembroke landscape. Indeed, as much as we transduce to create immersive environments (virtual reality devices like the Oculus headset, for example), an obvious transduction allows you to understand how the sensations of experience are constructed while those sensations remain powerful. In most soundscape projects, completely seamless transduction is rare, but they are still affective works. Likewise, listeners are aware of the augmented experience of reality in my project. To append the original definition, then, a digital project at an "aesthetic and conceptual remove" can still produce powerful, affective experiences.
After I had built my website, I visited Pembroke to show relatives, residents, and local heritage stakeholders and record their reactions. Beyond general feedback, I wanted to test the Proust Effect throughout my website. I interviewed a total of nine people: four in one interview, three in another, and one alone, each in one-hour sessions; I also interviewed my uncle over email due to extenuating circumstances. I found an underwhelming reaction to the sounds. Everyone was generally impressed with the website and thought that sound enhanced the user experience, but I often had to push the topic of sound. When I interviewed my Nana, she made several comments on the banality of sound in her memories. She explained the entire rail system in Pembroke, how they used to ride the train, and that they lived between the Canadian Pacific and Canadian National Railway lines. But she does not remember the daily movement of the trains, which, she noted, one might assume would be "terribly noisy."
Shawn Graham et al. faced a similar issue with their HeritageCrowd project, a crowdsourced public history initiative that curated local heritage stories from Western Quebec through text messages, Twitter, and web forms. The goal of the project was to "provide a new avenue for nonprofessional knowledge to enter into the academic world of knowledge production."17 However, users took the project in different directions. The website was meant to be a platform that turned user comments into a collaborative discussion, but confusion over the process of crowdsourcing history led some respondents to send messages without understanding the purpose and others to doubt that their knowledge was professional enough.18 Similarly, I found most of my interviewees were confused about the point of sound in my project and spoke generally about Pembroke's industrial history. This was not a negative outcome. As Graham et al. argue, "while it might not (cannot?) produce a polished, singular view, the aesthetic pleasure will lie in the abundance of perspectives that it provides."19 Perhaps as my project gains popularity in Pembroke, I will receive more responses on my sonification. If HeritageCrowd is any indication of unexpected responses to public history projects, I will still receive important perspectives on how people understand Pembroke's history.
The most excited response I received about senses and memory concerned the smell of Pembroke's fibreboard plant. One of my young Pembroke interviewees, Sarah, stated that the sounds did not affect her, but that she can often smell the fibreboard plant south of Pembroke. I interviewed Sarah to understand how my sounds impacted memory and could shape the historical imagination of someone who knew little about Pembroke's history. Her experience raises an important point: the lack of interest in sound can also indicate that the Proust Effect is working. Van Campen's definition of the Proust Effect (see the introduction) assumes that the impact of sensory input on memory is sudden. While Schiller's research in memory reconsolidation shows this is true in some cases, I want to suggest that the sudden trigger of memory caused by sensory input is not the only condition of the Proust Effect. The immediate impact of sensory input on memory is only one facet of Proust's theory of memory. Several interviewees even recalled specific memories after the interview. My interviewees showed that humans have more nuanced reactions to sound than Van Campen's Proust Effect assumes. Perhaps I have focused too much on the sudden, noticeable affects of sound. Schiller's research applies inversely here: when my interviewees attached nothing significant to a sound trigger, the sound produced no immediate affect. My work, then, can give the historical imagination significance where it would otherwise have relatively little. My Nana's most vivid memory of sound in Pembroke was from the Second World War. Pembroke was located near Petawawa Army Base, a major training location for soldiers. A siren sounded each night at 9PM to warn children to return home and, she noted, to protect them from the massive influx of "strange" men to the region. Nana's memory shows the affective power of discussing sound. My website has no siren because it is not part of Pembroke's industrial history.
But Nana shared her story about the siren when she was showing me where they lived in Pembroke. The lack of sound's immediate affect in my project allows sound to contest memory: its 'failure' in the context of a history project caused interviewees to comment on other unique experiences. These comments show that landscape is a fluid experience of connected sensory processes.
Because her reaction occurred outside the sudden experience of sound, Nana showed that sound can trigger powerful visualizations of memory within any timeframe. This suggests the importance of treating digital projects as somewhat longitudinal studies in which we continue to interact with our interviewees. If my website is a space where people can experience the industrial soundscape repeatedly, then I must follow up with them. Shaping someone's memories and historical imagination in a single interview is much different from a repeated activity. Schiller describes how, every Holocaust Memorial Day in Israel, when sirens sounded to initiate the day, her father, a survivor, would stay seated and sip his coffee.20 She initially attributed this to a sensory cue that triggered traumatic memories in her father. As she developed her memory reconsolidation hypothesis, however, she realized that "her father was rewriting his painful memories by associating them with a pleasant activity."21 I cannot expect sudden, powerful reactions from my interviewees, especially in one-hour sessions. Instead, I must be open to the possibility that people may interact with my project in unexpected ways, especially on their own time, or deliberately choose how they react in front of me. This all points to the importance of sharing authority in this project, which I discuss in the following section.
1 Stuart Eve, Dead Men’s Eyes: Embodied GIS, Mixed Reality and Landscape Archaeology (BAR British Series 600, 2014), 20.
2 Mark Grimshaw and Tom Garner, Sonic Virtuality: Sound as Emergent Perception (New York: Oxford University Press, 2015), 35-36.
3 Shuhei Hosokawa, "The Walkman Effect," in The Sound Studies Reader (New York: Routledge, 2012), 106, 109-110.
4 Michael Bull, "The Audio-Visual iPod," in The Sound Studies Reader (New York: Routledge, 2012), 207-208.
5 Daniela Schiller et al., “Preventing the Return of Fear in Humans Using Reconsolidation Update Mechanisms,” Nature 463, no. 7277 (January 7, 2010): 49.
6 Ibid., 52.
7 Michel Chion, Walter Murch, and Claudia Gorbman, Audio-Vision: Sound on Screen (New York: Columbia University Press, 1994), 63.
8 Stefan Helmreich, “Listening Against Soundscapes,” Anthropology News 51, no. 9 (December 1, 2010): 10.
12 Charles Hirschkind, The Ethical Soundscape: Cassette Sermons and Islamic Counterpublics, Cultures of History (New York: Columbia University Press, 2006), 2.
13 Ibid., 7.
14 Ibid., 6.
15 Ibid., 26-28.
16 Helmreich, “Listening Against Soundscapes,” 10.
17 Shawn Graham, Guy Massie, and Nadine Feuerherm, "The HeritageCrowd Project: A Case Study in Crowdsourcing Public History," in Writing History in the Digital Age (University of Michigan Press, 2013). http://quod.lib.umich.edu/d/dh/12230987.0001.001/1:9/--writing-history-in-the-digital-age?g=dculture;rgn=div1;view=fulltext;xc=1#9.3.
20 Stephen Hall, “Repairing Bad Memories,” MIT Technology Review, http://www.technologyreview.com/s/515981/repairing-bad-memories/.