Research-Creation

Photograph of Margaret Howe and a dolphin on the flooded balcony at the Communication Research Institute, St. Thomas, U.S. Virgin Islands, John C. Lilly papers, M0786. Dept. of Special Collections and University Archives, Stanford University Libraries, Stanford, Calif.

The Dolphin House (Audiovisual Installation)

Combining archival material, data sonification (sonic representations of data) and storytelling, The Dolphin House sonically remaps Cold War research on human-dolphin communication and situates it within the vertical cartography produced by the global expansion of military and scientific remote sensing infrastructures. The composition is based on archival recordings produced at the Dolphin Point Laboratory of John C. Lilly's Communication Research Institute ("CRI") in St. Thomas, U.S. Virgin Islands, where Lilly and his team carried out an ambitious research program that aimed to establish two-way communication between humans and dolphins.

The Dolphin House began as a tool for research on the history of the CRI. I initially arranged the archival recordings into "rooms" on a Digital Audio Workstation so that I could listen to them in a systematic but non-linear way as part of my research on the history of sound in the Cold War. Gradually, the DAW setup and the processed recordings evolved into The Dolphin House, which recreates the soundscape of the St. Thomas facility and, just as importantly, retraces the institutional and technological networks that supported and shaped the CRI.

The U.S. Navy, the Air Force and NASA, among other scientific and military institutions, provided funding and other support to the CRI to develop the St. Thomas facility and carry out a novel but controversial research program on human-dolphin communication. These institutions had a shared interest in advancing Lilly's work for various Cold War ends related to the conquest of vertical space (ocean, atmosphere, outer space).

One of Lilly's key collaborators, who can be heard in several parts of The Dolphin House sound composition, was Margaret Howe. Howe was a key figure in the development and implementation of a months-long experiment in which she would live with a dolphin 24 hours a day to accelerate the "reprogramming" of dolphins to communicate with humans (a story that has been retold in a number of news stories in recent years as well as a BBC documentary). The redesigned lab, informally referred to as the "flooded house" or the "dolphin house," was configured to facilitate prolonged human and dolphin cohabitation through, for example, rooms and a balcony flooded with 18 inches of water. It was also fitted with microphones suspended from the ceilings to record dolphin vocalizations, as well as Plexiglas tanks in which dolphins would listen to recordings of human speech and be rewarded with fish for attempting to mimic the sounds of the humans. Just as the house was flooded to bring human and dolphin participants close together for weeks or even months at a time, so too the house was saturated with acoustic technologies for capturing and manipulating the vocal sounds of humans and dolphins.

Unintentionally, the recording system picked up various types of "noise": elemental (e.g., water and wind), technological (e.g., tape hiss), and social (conversations that were not so much documenting interspecies communication research as expressing frustration with various aspects of working at the lab). These noises have been processed and arranged in The Dolphin House to create a "sonic blueprint" of the CRI and its interior spaces. Like the blueprint for a building, which outlines the spaces for people and objects as well as the underlying infrastructure, The Dolphin House has several layers, which in this case correspond to human-dolphin communication research practices and the military-scientific apparatus that supported them. The sound composition pieces together parts of the CRI's archival recordings into separate streams of sound for humans and dolphins which periodically interact. Additionally, the sound composition situates the CRI's work within the broader military-scientific infrastructure of remote sensing. The installation's other sonic layers were produced by "following the money," that is, by retracing the CRI's funding agencies and their activities around the world during the 1960s. In this way, The Dolphin House foregrounds what was so often left out of contemporaneous popular media and scientific discourse about Lilly's work: the fact that the CRI was, from a military perspective, one node among many in a planet-spanning network of command, control and communication.

The composition is organized into the following parts, each of which focuses on a specific room or facility in the CRI and the sonic environment created by the interaction of the building with the technological and natural environments in which it was embedded as well as the human and nonhuman actors who inhabited the various spaces of the lab (the start time of each section is indicated in square brackets):

  • “A Design for Living with Dolphins (Office)” [0:00]

Featuring the voice of John Lilly, the first “room” in the composition outlines the objectives of the research program and some of the controversial techniques through which the CRI strived to achieve them. The recombined and processed archival recordings articulate the sonic space of Lilly’s office to the recently declassified cartography of the militarized ocean in the Cold War. For example, low-end pulses increase in pace over the years as one “moves” through this part of the composition (the utterance of “one” corresponds to 1961, “two” to 1962, and so on), tracking the rise in the number of U.S. nuclear-armed submarine patrols per year in the Atlantic during the period in which the CRI recordings were made (from 12 patrols in 1961 to 80 in 1965).
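The patrol-count sonification described above can be sketched in a few lines. This is a minimal illustration, not the installation's actual patch: only the 1961 (12 patrols) and 1965 (80 patrols) figures come from the text, so the intermediate annual counts below are placeholders.

```python
# Hypothetical sketch of the patrol-count sonification: one low-end pulse
# per patrol, spread evenly across a fixed section length, so the pulse
# rate rises year by year. Figures for 1962-1964 are assumed placeholders;
# only 12 (1961) and 80 (1965) are given in the text.
PATROLS = {1961: 12, 1962: 25, 1963: 42, 1964: 60, 1965: 80}

def pulse_times(year: int, section_seconds: float = 30.0) -> list[float]:
    """Onset times (in seconds) of the pulses for a given year."""
    n = PATROLS[year]
    return [i * section_seconds / n for i in range(n)]

rate_1961 = len(pulse_times(1961)) / 30.0  # 0.4 pulses per second
rate_1965 = len(pulse_times(1965)) / 30.0  # ~2.7 pulses per second
```

Rendering each onset as a short low-frequency burst would reproduce the accelerating pulse the text describes.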

  • “What’s Wrong? (Upstairs Pool)” [2:48]

A conversation between Margaret Howe and one of the workers upstairs near the dolphin pool prior to its conversion into a “wet room” in which Howe would spend several months living and teaching English to Peter the dolphin. The conversation alludes to the process of renovating the lab and the way the CRI was “adaptable” to the evolving research program. But unlike the written accounts of the lab’s redesign, Howe’s conversation is suggestive not only of the key role she played in the conceptualization and implementation of the “wet room” experiment for prolonged cohabitation of human and dolphin, but also of the challenges involved in merging aquatic and terrestrial living spaces in the same room. The sonification overlay for this part of the composition is the sound of the Igloo White sensor network, which was used by the U.S. military in Southeast Asia in the 1960s and 1970s to track, locate and destroy people and vehicles. The “wind” at the beginning of this section is white noise that has been shaped according to Igloo White measurements of the ambient noise created by 10 miles per hour winds. This part of the composition also translates Igloo White’s spectral display of a moving vehicle into sound – the chords are generated from wavetable synthesis based on the spectral display of a truck’s exhaust picked up by Igloo White’s acoustic detectors (CBS Laboratories, p. 269). The texture and pitch of the chords change as the wavetable oscillator moves up and down the time axis of the spectral display, translating it into sound.
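The "wind" layer described above (white noise shaped by measured ambient-noise levels) can be sketched as follows. The envelope values here are assumptions, since the actual Igloo White measurements are not reproduced in this text.

```python
# Sketch of a wind layer: white noise whose amplitude follows a measured
# (here: assumed placeholder) ambient-noise profile for 10 mph winds.
import random

def shaped_wind(n_samples: int, envelope: list[float], seed: int = 0) -> list[float]:
    """White noise scaled by an amplitude envelope stretched over the buffer."""
    rng = random.Random(seed)
    out = []
    for i in range(n_samples):
        # Index into the envelope proportionally to position in the buffer.
        e = envelope[i * len(envelope) // n_samples]
        out.append(e * rng.uniform(-1.0, 1.0))
    return out

env = [0.1, 0.3, 0.6, 0.4, 0.2]   # assumed wind-noise profile, not measured data
samples = shaped_wind(1000, env)
```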

  • “TTY Interlude (Teletype Network)” [5:40]

Saturated with the sound of teletype machines, this part of the composition speculatively reconstructs the sounds inside a teletype network. Narratively, this section focuses on the backstory of the CRI by including excerpts from U.S. government memos (which were sent by teletype between Washington, Miami and San Juan) contained in the Federal Bureau of Investigation’s John C. Lilly file, which was released in 2016 through a Freedom of Information Act request. The memos, originally sent in 1960, concern Lilly’s “problem” – his accusation that someone in the intelligence establishment had blocked his request for secret clearance at a meeting with military officials about his research (the memos deny this and claim Lilly was granted secret clearance in 1959). The sonic texture of this part of the composition was generated from recordings of teletype machines that have been processed and arranged to evoke a sense of tension between Lilly and the military-intelligence establishment – a tension which eventually led the military to partner with researchers other than Lilly who would more reliably produce the kinds of knowledges and techniques they could apply in undersea warfare.  

  • “Dolphin Elevator (Sea Pool)” [6:58]

Featuring the voice of one of the CRI researchers (possibly its former director, Gregory Bateson, though this isn’t indicated in the archives’ note for this audio tape), the sound composition turns to a seemingly tranquil moment in the archival recordings. In keeping with the practice of the CRI, the researcher describes what he is seeing and hearing as a way of providing an auditory version of a lab report or field notes. In this instance, one of the dolphins is being moved onto the elevator from the sea pool (used mainly for visual observation through a large window at its base) to the upstairs pool (used mainly for audio recording since it was shielded to some degree from the wind and the sounds of the ocean). Texturally, the composition gestures to the openness and optimism of the researchers in the early years of the project. At the same time, the high frequency “beeps” are sonifications of the U.S. military’s Corona spy satellite program. Each of the ten beeps (the sequence repeats in a loop) represents approximately 100 images (the Corona satellite captured 930 images in 24 hours on November 19, 1964). The pitch represents the latitude of the area captured in each photograph and the beep’s position in the stereo field represents its longitude. The locations are in the following order (using 1965 names): Soviet Union (northeast), Tibet, China, Kazakh SSR, Soviet Union (Moscow), Republic of Congo, Cuba, Haiti, Atlantic Ocean approximately 300 miles from San Francisco, Soviet Union (northeast).
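The Corona beep mapping (pitch for latitude, stereo position for longitude) can be sketched like this. The pitch range and the example coordinates are assumptions for illustration, not the installation's actual values.

```python
# Sketch of the Corona sonification mapping: latitude -> pitch,
# longitude -> stereo pan. The 200-2000 Hz range is an assumed choice.
def beep_params(lat_deg: float, lon_deg: float) -> tuple[float, float]:
    """Map latitude to pitch (Hz) and longitude to pan (-1 left .. +1 right)."""
    pitch = 200.0 + (lat_deg + 90.0) / 180.0 * 1800.0  # -90..90 -> 200..2000 Hz
    pan = lon_deg / 180.0                              # -180..180 -> -1..+1
    return pitch, pan

# e.g. an assumed photograph over Moscow (~55.8 N, 37.6 E): a high-ish
# pitch, slightly right of centre.
pitch, pan = beep_params(55.8, 37.6)
```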

  • “Tape Rubato (Electronics Room)” [8:44]

Audio tape was a key part of Lilly’s methodology for discerning communication between human and dolphin. Much of the CRI’s work with dolphins focused on recording and comparing human and dolphin vocalization and listening capacities. This intense concern with identifying moments of correspondence, or what Lilly called “interlocking,” between human and dolphin sound seems to have been motivated by the need to produce results that could be shown to the military and other sources of funding, the scientific community as well as the public. Lilly’s practice of slowing down recordings of dolphin vocalizations and speeding up those of humans to discern likenesses is playfully reanimated in this part of the composition. Lilly’s tape-based methodology is translated into the textural and rhythmic layer through the musical technique of rubato – playing with time by speeding up and slowing down the tempo of a performance – which has been applied not only to the human and dolphin voices in the recordings but also to the sounds of the electronic equipment Lilly used to manipulate sounds, the most prominent of which is the LINC computer.
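Lilly's varispeed technique (changing tape speed so that tempo and pitch shift together) can be sketched with a naive resampler. This is an illustrative stand-in, not the installation's actual processing chain.

```python
# Sketch of tape varispeed: resampling shifts tempo and pitch together,
# as on a variable-speed tape deck. A nearest-sample resampler is the
# simplest (lo-fi) way to illustrate the idea.
def varispeed(samples: list[float], rate: float) -> list[float]:
    """rate > 1 speeds up (raising pitch); rate < 1 slows down (lowering pitch)."""
    n_out = int(len(samples) / rate)
    return [samples[min(int(i * rate), len(samples) - 1)] for i in range(n_out)]

slow = varispeed(list(range(100)), 0.5)  # 200 samples: half speed, octave down
fast = varispeed(list(range(100)), 2.0)  # 50 samples: double speed, octave up
```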

  • “Any Interruption (Upstairs Tank)” [11:44]

Margaret Howe was closely involved in the redesign of the lab to facilitate prolonged cohabitation experiments. Indeed, Howe seems to have been the main proponent of creating flooded sections of the “dolphin house,” thereby expanding the lab’s “zone of encounter” between human and dolphin. The house was for Howe not simply a container or enclosure for interspecies communication; it was itself a medium that narrowed the acoustic arena down to just the human and dolphin participants (so effectively that at times it proved to be a source of anxiety and depression for Howe). Overlaid on this part of the composition is the sonification of “Operation Long Shot,” an underground nuclear test conducted by the U.S. military on Amchitka Island in the Aleutian Islands (Alaska). Long Shot exploded at a depth of 702 meters and was used to test, among other things, the accuracy of a recently developed global seismic sensor network. The seismic waves of the explosion were successfully detected by stations around the world. The low-frequency blast at the beginning of the composition (0:00) represents the Long Shot nuclear test at 21:00 hours on October 29, 1965. The low-frequency sounds in “Any Interruption” represent the vibrations at the time they would have been received by the following seismic stations, with the travel time of the blast matching their start time in the sound composition: Rarotonga, Cook Islands (11 min, 43 sec), Valentia, Ireland (11 min, 54 sec), Bermuda (12 min, 09 sec), Istanbul, Turkey (12 min, 34 sec), Shiraz, Iran (12 min, 46 sec), San Juan, Puerto Rico (13 min, 4 sec), Adelaide, Australia (13 min, 16 sec) and Caracas, Venezuela (13 min, 31 sec).
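The travel-time mapping described above can be expressed directly: with the detonation placed at 0:00 of the composition, each station's sound enters at its real seismic travel time. The arrival times are those listed in the text.

```python
# Long Shot arrival times after the 21:00 (Oct 29, 1965) detonation,
# as listed in the text, in (minutes, seconds).
TRAVEL_TIMES = {
    "Rarotonga": (11, 43), "Valentia": (11, 54), "Bermuda": (12, 9),
    "Istanbul": (12, 34), "Shiraz": (12, 46), "San Juan": (13, 4),
    "Adelaide": (13, 16), "Caracas": (13, 31),
}

def entry_second(station: str) -> int:
    """Composition time (seconds) at which a station's arrival is heard,
    given that the detonation is placed at 0:00 of the composition."""
    minutes, seconds = TRAVEL_TIMES[station]
    return minutes * 60 + seconds

first = entry_second("Rarotonga")  # 703 s
last = entry_second("Caracas")     # 811 s
```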

  • “Self-Luminous Borders (Isolation Tank)” [14:25]

According to Lilly (1972), “If [the dolphin] is put in solitary isolation or left with other dolphins he will not learn anything of our language. If he is forced to obtain satisfaction of his needs through vocalizations with and from human beings, then the beginnings of language may possibly be inculcated in the particular animal.” Although the U.S. policy on LSD research became more restrictive in the mid-1960s, Lilly was one of a handful of researchers who retained authorization to continue this work; between 1964 and 1966, Lilly spent an increasing amount of time in the Dolphin Point Lab’s isolation tank on LSD as a way of exploring the “inner space” of consciousness (Lilly, 1972, pp. 59-60). This section of the sound composition includes an excerpt from one of Lilly’s conference papers in which he describes the design of the isolation tank and his experience of using it as a medium for accessing inner space. “Self-Luminous Borders” also complicates the notion of the “isolation” in this context as a condition of being cut off from the rest of the world or as a sharp divide between Lilly’s work on communication and consciousness on the one hand and his “planetary obligations” and institutional entanglements on the other. While the tank enabled a high degree of sensory isolation, faint sounds could still be heard inside, and water occasionally leaked into the user’s mask. The composition recreates a sense of what it might have been like to float in Lilly’s isolation tank, but it also simulates the disruption of sensory isolation by acoustic and aqueous leaks. It also reimagines the isolation tank as a permeable membrane between the CRI and the oceanic infrastructure of nuclear imperialism. It does so by simulating aspects of the CRI’s institutional and infrastructural context, which in this case includes the U.S. Navy and its mobilization of nuclear weapons at sea. 
The nuclear militarization of the ocean is figured here as five sonar pings, one for each of the years between 1961 and 1965. Each of the returning echoes following the pings corresponds to approximately 100 nuclear weapons installed on U.S. Navy ships and submarines in the Atlantic (right speaker) and Pacific (left speaker). By 1965, there were 1,544 such weapons in the Atlantic and 1,571 in the Pacific.
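The echo arithmetic behind these pings is straightforward. The 1965 totals come from the text; the rounding rule (nearest whole echo) is an assumption about how the counts map onto discrete echoes.

```python
# Sketch of the ping-and-echo figure: each returning echo stands for
# roughly 100 nuclear weapons afloat in that ocean. Rounding to the
# nearest whole echo is an assumed convention.
def echoes(weapon_count: int, weapons_per_echo: int = 100) -> int:
    """Number of echoes representing a given weapon count."""
    return round(weapon_count / weapons_per_echo)

atlantic_1965 = echoes(1544)  # 15 echoes in the right speaker
pacific_1965 = echoes(1571)   # 16 echoes in the left speaker
```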

Acknowledgments 

I am very grateful to Philip Bailey and the Estate of John C. Lilly for giving me permission to use the Communication Research Institute’s audio recordings in this installation. I would also like to thank the staff of the Department of Special Collections and University Archives at Stanford University for their invaluable advice and support.

Media Archeology of the SOund SUrveillance System (SOSUS)


During the Cold War, a network of underwater microphones (“hydrophones”) and listening stations in the US, Canada, the Bahamas, Iceland and Wales monitored sound in the Atlantic in order to detect and track Soviet submarines. Today, some of the abandoned listening stations are still standing, though they are rapidly deteriorating. In my new research-creation project, I use photography, video and photogrammetry to create 3D models of the listening stations. The 3D models will in turn be used to create interactive physical and virtual platforms for public engagement with the social, cultural and technological elements of this once-secret undersea surveillance system. I am particularly interested in the experiences of those who once worked in these facilities as well as the accounts of those who lived nearby. How did this environment shape the way people experienced the Cold War, ocean sound and ocean space more broadly? What new forms of identity and community were developed within and between the listening “nodes” of the network? How did the construction of these listening stations alter the physical environment and impact local economies, identities and culture? In collaboration with groups with expertise in acoustic ocean sensing and groups with lived experience of the network, the project aims to virtually reconstruct the infrastructure and material meanings of the network, and imagine what new forms of community could be developed around an “audible ocean.” Below are preliminary works which explore the historiographic, narrative and aesthetic possibilities afforded by abandoned sonar infrastructure as well as various media, formats and approaches for the design of physical and virtual models of these Cold War “ruins.”

Preliminary photo shoot of United States Navy Facility Shelburne, Shelburne, Nova Scotia (J. Shiga, July 2018 and August 2019).

The images in this collection, produced by the PI on an iPhone, are intended to provide a sense of the current physical condition of the interiors of NAVFACs as well as the historiographic, aesthetic and narrative possibilities afforded by the site’s location, the encroachment of vegetation, and traces of activity before and after decommissioning. The research-creation component would entail revisiting the NAVFAC with professional video, audio and photography equipment to generate content that will be accessible to community participants during the participatory design events and to the general public once all layers of the virtual model are complete. Additional equipment required for this stage includes the following: a DSLR camera capable of capturing 4K video, a gimbal (to enable smooth video sequences of hallways, rooms, staircases, etc.), battery-powered floodlights (for rooms without windows/lights) and an audio recorder with external microphones to capture the sound of the building and its interactions with wind, rain and other elements of the environment. In subsequent stages of the project, LiDAR will be used to produce a digital point cloud of interior spaces which will then be merged with the 3D model of the exterior of the station in Unity software.

 

Preliminary 3D model of NAVFAC Shelburne (J. Shiga, August 2019).

Using a drone, the PI captured several hundred aerial images of the site. The photo capture was set so that there was 70% overlap between images and each image was encoded with the coordinates of the drone in three-dimensional space. The PI then used photogrammetry software to construct a point cloud and a final “meshed” model of the station. The drone photography took approximately two hours and the rendering process took one full day. This low-resolution model is intended to demonstrate the feasibility of producing a virtual model of a large structure and its surroundings relatively quickly. As part of the research-creation component of this project, I will revisit NAVFAC Shelburne to produce a more detailed and accurate model by flying the drone at a lower altitude and with greater overlap between images. The same process will be used to produce 3D models of two other NAVFACs.
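The overlap arithmetic behind this capture plan can be sketched as follows. The image footprint and flight-line length below are assumed example values; the text specifies only the 70% overlap.

```python
# Sketch of forward-overlap spacing: with 70% overlap, consecutive photos
# are spaced 30% of the image footprint apart. The 60 m footprint and
# 360 m line length are assumed example values, not measurements.
def photo_spacing(footprint_m: float, overlap: float = 0.70) -> float:
    """Distance the drone travels between consecutive shots."""
    return footprint_m * (1.0 - overlap)

def photos_per_line(line_length_m: float, footprint_m: float,
                    overlap: float = 0.70) -> int:
    """Shots needed to cover one flight line (intervals + 1)."""
    return round(line_length_m / photo_spacing(footprint_m, overlap)) + 1

spacing = photo_spacing(60.0)        # 18.0 m between shots
n = photos_per_line(360.0, 60.0)     # 21 photos per line
```

Flying lower shrinks the footprint, so the same overlap demands proportionally more photos per line, which is why the higher-resolution revisit will require far more images.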


Precursors and inspirations for the physical interactive model: Arctic Adaptations: Nunavut at 15

Physical 3D models of former SOSUS listening stations will be produced through drone-based photogrammetry. The models will initially be used to create white 3D-printed models of the stations, which will facilitate the presentation of animated information directly on the structures themselves, either through video projection or through users’ phones/tablets via AR. The projected animations will be designed to engage users on multiple levels, beginning with explanations of the basic technology, its institutional developers and users, and the impacts of the sonar network, and moving toward more critical forms of engagement with the central issues and questions that drive this project regarding the role of sonar in shaping perceptions, representations and uses of the deep ocean. Crucial to the project’s co-creation component, the PI will work with local museums in communities located close to former SOSUS sites in Atlantic Canada to have the platform displayed for 30 days in each location prior to a public talk and future-oriented redesign workshop. To this end, the physical platform will consist of 1’ x 1’ 3D-printed portions of the landscape and buildings that can be easily fitted together into a 4’ x 2’ model, so that it can be easily shipped or transported to conferences and other events. The augmented reality (AR) component will allow users to hold their mobile phones over the model and pan, zoom, etc. on the various structures. Specific pixels in the image trigger animations that will in turn superimpose buildings, cables, and other features that no longer exist, as well as archival materials, photographs and video of the interiors. The app will be developed in Augment for iOS and Android, which facilitates the integration of printed documents and multimedia materials into the 3D model.
In this way, while the physical platform itself is relatively inexpensive and lightweight, it takes advantage of the modularity and variability of AR and cloud storage to deliver continuously-updated virtual layers to the user’s phone wherever the platform is installed. Crucially, the platform will also work as a knowledge-creation tool, enabling users to annotate or “tag” the AR layers of the model with sound, image or text which can then be seen/heard by subsequent visitors. The primary advantage of this combination of physical display and augmented reality is that it enables local community members’ accounts, based on oral histories and narratives of lived experience of these spaces, to be integrated into the project in a way that allows for aspects of the official archival history of SOSUS, including its many gaps, redactions and silences, to be questioned, complicated and challenged.

A key inspiration for the physical interactive model is Arctic Adaptations: Nunavut at 15 by Lateral Office, pictured below.


Photos: Greg Gerla

 


Photos: John Shiga

Arctic Adaptations provides a model that could feasibly be applied in the context of this project to develop a portable but durable interactive display. However, the proposed research-creation project differs in two key ways. First, the 3D models will be developed through drone-based photogrammetry (discussed above), which will in turn allow surfaces and interiors to be displayed on the model either with projection mapping or via augmented reality. Second, the model is not the final product but rather the foundation for a participatory research-creation initiative in which “layers” are virtually added to the SOSUS model, integrating lived experiences of the social and environmental impacts of SOSUS as well as potential redesigns of the SOSUS stations, creatively repurposed and restructured in a way that articulates and responds to the contemporary socio-environmental concerns of local communities.

The virtual, AR-driven layers of the model will be developed in Unity software, which enables photography, video and LiDAR scans of room interiors to be integrated with drone-based photogrammetry of the exteriors of the buildings. On one level, the project works as a mode of preserving “acoustic ruins” which are deteriorating rapidly due to scavenging, redevelopment, weather and vandalism. However, the primary objective of the online platform is to present the research findings about particular points of contact between sonar and ocean noise in a form that engages both academic and non-academic audiences with “ocean noise” from the Cold War to the present. By merging archival materials with my own photographic, video and audio recordings, the model will allow the user to see and hear how the space changes from the Cold War to the present, while the project’s findings around the legacy of Cold War acoustemologies will foreground both the continuities and ruptures in the technologies, cultural practices and modes of perception that shape ocean noise then and now.

The goal is not only to provide access to the archival documents which this project has so far collected about the construction of sound and noise in SOSUS but also to enable users within and beyond the academy to engage in a sonic historiography of SOSUS. This includes listening to the sounds and spaces of Cold War ocean surveillance and reflecting upon the manner in which these acoustic elements shaped experience, identity, community and knowledge in relation to ocean space. Sonic historiographic practice may also include the application and repurposing of acoustic concepts (signal, noise, silence, echo, etc.) for detecting and problematizing patterns in archival materials (e.g., applying the sound/noise framework to understand what various publics can access, observe and interpret through barriers imposed by redactions, restrictions and the over 700 abbreviations used in SOSUS documentation).

As with the physical platform discussed above, which will be displayed in and redesigned by two communities in Atlantic Canada, the virtual platform will facilitate exploration of future uses of a SOSUS-like acoustic “observatory”: how might the SOSUS network be reconfigured to constitute new forms of acoustic community, knowledge and experience? This latter component will be particularly useful for collecting stakeholders’ ideas about how to develop a larger-scale installation for engaging with Canada’s role in enacting the acoustic front or border and its “vertical territory” in the Atlantic during the Cold War. The process of developing this experimental assemblage will be presented in a white paper to cultural heritage organizations and the Directorate of History and Heritage (DHH), outlining the value of material artifacts from the NAVFACs in Canada as sites of cultural memory and as platforms for public engagement with the audible history of military ocean sensing.

Further, the research-creation methodology of this project may be applied to other projects in critical studies of media infrastructure whereby the exploration, documentation and redesign of infrastructures aims not so much to preserve infrastructure “ruins” as to repurpose them as creative “diffraction apparatuses” which allow scholars and stakeholders to confront “sedimentations of past differentiations” – in this case, in the context of acoustic sensing systems – to better understand the material-discursive environment that made the acoustic enclosure of ocean space possible, and to imagine new ways of defining boundaries and shaping differences in ocean sound that might allow more open-ended and less instrumental, enclosed and extractive configurations of human activity and ocean environment to come into being (Schadler, 2019, p. 219).

Schadler, C. (2019). Enactments of a new materialist ethnography: Methodological framework and research processes. Qualitative Research, 19(2), 215-230.
