Research-Creation

During the Cold War, a network of underwater microphones (“hydrophones”) and listening stations in the US, Canada, the Bahamas, Iceland and Wales monitored sound in the Atlantic in order to detect and track Soviet submarines. Today, some of the abandoned listening stations are still standing, though they are rapidly deteriorating. In my new research-creation project, I use photography, video and photogrammetry to create 3D models of the listening stations. The 3D models will in turn be used to create interactive physical and virtual platforms for public engagement with the social, cultural and technological elements of this once-secret undersea surveillance system. I am particularly interested in the experiences of those who once worked in these facilities as well as the accounts of those who lived nearby. How did this environment shape the way people experienced the Cold War, ocean sound and ocean space more broadly? What new forms of identity and community were developed within and between the listening “nodes” of the network? How did the construction of these listening stations alter the physical environment and impact local economies, identities and culture? In collaboration with groups with expertise in acoustic ocean sensing and groups with lived experience of the network, the project aims to virtually reconstruct the infrastructure and material meanings of the network, and to imagine what new forms of community could be developed around an “audible ocean.” Below are preliminary works that explore the historiographic, narrative and aesthetic possibilities afforded by abandoned sonar infrastructure, as well as various media, formats and approaches for the design of physical and virtual models of these Cold War “ruins.”

Preliminary photo shoot of United States Navy Facility Shelburne, Shelburne, Nova Scotia (J. Shiga, July 2018 and August 2019).

The images in this collection, produced by the PI on an iPhone, are intended to provide a sense of the current physical condition of the interiors of NAVFACs as well as the historiographic, aesthetic and narrative possibilities afforded by the site’s location, the encroachment of vegetation, and traces of activity before and after decommissioning. The research-creation component would entail revisiting the NAVFAC with professional video, audio and photography equipment to generate content that will be accessible to community participants during the participatory design events and to the general public once all layers of the virtual model are complete. Additional equipment required for this stage includes the following: a DSLR camera capable of capturing 4K video, a gimbal (to enable smooth video sequences of hallways, rooms, staircases, etc.), battery-powered floodlights (for rooms without windows/lights) and an audio recorder with external microphones to capture the sound of the building and its interactions with wind, rain and other elements of the environment. In subsequent stages of the project, LiDAR will be used to produce a digital point cloud of interior spaces, which will then be merged with the 3D model of the exterior of the station in Unity software.
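
As a concrete illustration of that final step, the sketch below shows one plausible way to align a LiDAR interior scan with a photogrammetry-derived point cloud before importing the merged result into Unity. It is a minimal sketch using the open-source Open3D library; the file names, voxel sizes and distance thresholds are assumptions for illustration, not the project’s actual pipeline.

```python
# Illustrative sketch (not the project's actual pipeline): aligning a LiDAR
# interior scan with a photogrammetry-derived point cloud before import into
# Unity. Requires the open3d package; input file names are hypothetical.
import open3d as o3d
import numpy as np

# Hypothetical file names for the two point clouds.
lidar = o3d.io.read_point_cloud("navfac_interior_lidar.ply")
photo = o3d.io.read_point_cloud("navfac_exterior_photogrammetry.ply")

# Downsample both clouds to a common resolution so registration runs quickly.
lidar_down = lidar.voxel_down_sample(voxel_size=0.05)
photo_down = photo.voxel_down_sample(voxel_size=0.05)

# Estimate surface normals, which point-to-plane ICP requires.
for cloud in (lidar_down, photo_down):
    cloud.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=0.2, max_nn=30))

# Refine a rough initial alignment (identity here) with ICP.
result = o3d.pipelines.registration.registration_icp(
    lidar_down, photo_down,
    max_correspondence_distance=0.3,
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration
        .TransformationEstimationPointToPlane())

# Apply the recovered transform and write a merged cloud for Unity import.
lidar.transform(result.transformation)
o3d.io.write_point_cloud("navfac_merged.ply", lidar + photo)
```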


Preliminary 3D model of NAVFAC Shelburne (J. Shiga, August 2019).

Using a drone, the PI captured several hundred aerial images of the site. The photo capture was set so that there was 70% overlap between consecutive images, and each image was encoded with the coordinates of the drone in three-dimensional space. The PI then used photogrammetry software to construct a point cloud and a final “meshed” model of the station. The drone photography took approximately two hours and the rendering process took one full day. This low-resolution model is intended to demonstrate the feasibility of producing a virtual model of a large structure and its surroundings relatively quickly. As part of the research-creation component of this project, I will revisit NAVFAC Shelburne to produce a more detailed and accurate model by flying the drone at a lower altitude and with greater overlap between images. The same process will be used to produce 3D models of two other NAVFACs.
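
To make the overlap constraint concrete, the short calculation below shows how flight altitude and camera field of view determine the spacing between shots for a given forward overlap. All parameter values are assumptions for illustration, not the settings used at NAVFAC Shelburne.

```python
# A minimal, illustrative calculation (parameter values are assumptions, not
# from the project) of the shot spacing needed for a given forward overlap,
# which is what "70% overlap" constrains during drone capture.
import math

altitude_m = 60.0   # assumed flight altitude above the site
fov_deg = 84.0      # assumed camera field of view along the flight direction
overlap = 0.70      # forward overlap between consecutive images

# Ground footprint of a single image along the flight direction.
footprint_m = 2 * altitude_m * math.tan(math.radians(fov_deg / 2))

# Distance the drone may travel between shots while keeping the overlap.
spacing_m = footprint_m * (1 - overlap)

print(f"footprint: {footprint_m:.1f} m, shot spacing: {spacing_m:.1f} m")
# Flying lower shrinks the footprint, so the same overlap forces more,
# closer-spaced images -- hence the denser, more detailed model.
```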


Precursors and inspirations for the physical interactive model: Arctic Adaptations: Nunavut at 15

Physical 3D models of former SOSUS listening stations will be produced through drone-based photogrammetry. The models will initially be used to create white 3D-printed models of the stations, which will facilitate the presentation of animated information directly on the structures themselves, either through video projection or through users’ phones/tablets via AR. The projected animations will be designed to engage users on multiple levels, beginning with explanations of the basic technology, the institutional developers/users and the impacts of the sonar network, and building toward more critical forms of engagement with the central issues and questions that drive this project regarding the role of sonar in shaping perceptions, representations and uses of the deep ocean. Crucial to the project’s co-creation component, the PI will work with local museums in communities located close to former SOSUS sites in Atlantic Canada to have the platform displayed for 30 days in each location prior to a public talk and a future-oriented redesign workshop. To make this feasible, the physical platform will consist of 1’ x 1’ 3D-printed portions of the landscape and buildings that can be easily fitted together into a 4’ x 2’ model, so that it can be easily shipped or transported to conferences and other events.

The augmented reality (AR) component will allow users to hold their mobile phones over the model and pan, zoom and otherwise explore the various structures. Specific points in the camera image trigger animations that superimpose buildings, cables and other features that no longer exist, as well as archival materials, photographs and video of the interiors. The app will be developed in Augment for iOS and Android, which facilitates the integration of printed documents and multimedia materials into the 3D model. In this way, while the physical platform itself is relatively inexpensive and lightweight, it takes advantage of the modularity and variability of AR and cloud storage to deliver continuously updated virtual layers to the user’s phone wherever the platform is installed. Crucially, the platform will also work as a knowledge-creation tool, enabling users to annotate or “tag” the AR layers of the model with sound, image or text, which can then be seen or heard by subsequent visitors (a sketch of one possible data model for these tags appears below). The primary advantage of this combination of physical display and augmented reality is that it enables local community members’ accounts, based on oral histories and narratives of lived experience of these spaces, to be integrated into the project in a way that allows aspects of the official archival history of SOSUS, including its many gaps, redactions and silences, to be questioned, complicated and challenged.
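
The sketch below illustrates one hypothetical data model for such community tags: each annotation is anchored to a position on the model and stored in the cloud so that later visitors can encounter it. The class, field names and URL are illustrative assumptions, not a specification of the Augment platform or of the project’s eventual implementation.

```python
# A hypothetical sketch of the data model behind the "tagging" feature: each
# community annotation is anchored to a location on the model and synced via
# cloud storage so later visitors can see or hear it. Names and fields are
# illustrative assumptions only.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Annotation:
    author: str                          # contributor's display name
    media_type: str                      # "sound", "image" or "text"
    media_url: str                       # cloud-storage location of the media
    anchor: tuple[float, float, float]   # position on the 3D model (metres)
    created: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# Example: an oral-history clip pinned to a spot on the terminal building.
tag = Annotation(
    author="community participant",
    media_type="sound",
    media_url="https://example.org/clips/shift-change.mp3",  # hypothetical
    anchor=(12.4, 0.0, -3.1),
)
print(tag)
```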

A key inspiration for the physical interactive model is Arctic Adaptations: Nunavut at 15 by Lateral Office, pictured below.


Photos: Greg Gerla



Photos: John Shiga

Arctic Adaptations provides a model that could feasibly be applied in the context of this project to develop a portable but durable interactive display. However, the proposed research-creation project differs in two key ways. First, the 3D models will be developed through drone-based photogrammetry (discussed above), which will in turn allow surfaces and interiors to be displayed on the model either with projection mapping or via augmented reality. Second, the model is not the final product but rather the foundation for a participatory research-creation initiative in which virtual “layers” are added to the SOSUS model, integrating lived experiences of the social and environmental impacts of SOSUS as well as potential redesigns of the SOSUS stations, creatively repurposed and restructured in ways that articulate and respond to the contemporary socio-environmental concerns of local communities.

The virtual, AR-driven layers of the model will be developed in Unity software, which enables photography, video and LiDAR scans of room interiors to be integrated with drone-based photogrammetry of the exteriors of the buildings. On one level, the project works as a mode of preserving “acoustic ruins” which are deteriorating rapidly due to scavenging, redevelopment, weather and vandalism. However, the primary objective of the online platform is to translate the project’s research findings about particular points of contact between sonar and ocean noise into forms that engage both academic and non-academic audiences with “ocean noise” from the Cold War to the present. By merging archival materials with my own photographic, video and audio recordings, the model will allow the user to see and hear how the space changes from the Cold War to the present, while the project’s findings around the legacy of Cold War acoustemologies will foreground both the continuities and ruptures in the technologies, cultural practices and modes of perception that shape ocean noise then and now.

The goal is not only to provide access to the archival documents which this project has so far collected about the construction of sound and noise in SOSUS, but also to enable users within and beyond the academy to engage in a sonic historiography of SOSUS. This includes listening to the sounds and spaces of Cold War ocean surveillance and reflecting upon the manner in which these acoustic elements shaped experience, identity, community and knowledge in relation to ocean space. Sonic historiographic practice may also include applying and repurposing acoustic concepts (signal, noise, silence, echo, etc.) to detect and problematize patterns in archival materials, for example, applying the sound/noise framework to understand what various publics can access, observe and interpret through barriers imposed by redactions, restrictions and the more than 700 abbreviations used in SOSUS documentation.
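
As a toy illustration of that last idea (an assumption-laden sketch, not the project’s analysis code), the snippet below treats redaction markers and unexplained acronyms as “noise” and computes how much of a document remains legible “signal” for a lay reader. The abbreviation list and sample sentence are hypothetical.

```python
# An illustrative repurposing of the signal/noise framing: measure how much
# of a document is "noise" to a public reader -- redaction markers plus
# unexplained abbreviations -- versus readable "signal". The abbreviation
# set below is a tiny hypothetical sample (the real glossary exceeds 700).
import re

ABBREVIATIONS = {"NAVFAC", "SOSUS", "LOFAR", "CAESAR"}
REDACTION = re.compile(r"\[REDACTED\]|█+")

def noise_ratio(text: str) -> float:
    """Fraction of tokens a lay reader cannot parse: redactions + acronyms."""
    tokens = text.split()
    if not tokens:
        return 0.0
    noisy = sum(
        1 for t in tokens
        if REDACTION.search(t) or t.strip(".,;:") in ABBREVIATIONS)
    return noisy / len(tokens)

# Hypothetical sample sentence in the style of SOSUS documentation.
sample = "NAVFAC personnel routed LOFAR grams to [REDACTED] for analysis."
print(f"noise ratio: {noise_ratio(sample):.2f}")
```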

As with the physical platform discussed above, which will be displayed in and redesigned by two communities in Atlantic Canada, the virtual platform will facilitate exploration of future uses of a SOSUS-like acoustic “observatory”: how might the SOSUS network be reconfigured to constitute new forms of acoustic community, knowledge and experience? This latter component will be particularly useful for collecting stakeholders’ ideas about how to develop a larger-scale installation for engaging with Canada’s role in enacting the acoustic front or border and its “vertical territory” in the Atlantic during the Cold War. The process of developing this experimental assemblage will be presented in a white paper to cultural heritage organizations and the Directorate of History and Heritage (DHH), outlining the value of material artifacts from the NAVFACs in Canada as sites of cultural memory and as platforms for public engagement with the audible history of military ocean sensing.

Further, the research-creation methodology of this project may be applied to other projects in critical studies of media infrastructure, whereby the exploration, documentation and redesign of infrastructures aims not so much to preserve infrastructure “ruins” as to repurpose them as creative “diffraction apparatuses” which allow scholars and stakeholders to confront “sedimentations of past differentiations” (Schadler, 2019, p. 219), in this case in the context of acoustic sensing systems, in order to better understand the material-discursive environment that made the acoustic enclosure of ocean space possible, and to imagine new ways of defining boundaries and shaping differences in ocean sound that might allow more open-ended and less instrumental, enclosed and extractive configurations of human activity and ocean environment to come into being.

Schadler, C. (2019). Enactments of a new materialist ethnography: Methodological framework and research processes. Qualitative Research, 19(2), 215-230.