I’m thrilled to have contributed a chapter entitled “Ping and the Material Meanings of Ocean Sound” to Nicole Starosielski and Janet Walker’s impressive volume, Sustainable Media: Critical Approaches to Media and Environment (Routledge, 2016). See my Publications page for full citation information and a link to the chapter.
Author: John Shiga
Quebec Austerity Smells Like Tear Gas
I’m attending the Society for Cinema and Media Studies (SCMS) conference in Montreal at the moment. As I was returning to the conference hotel (Fairmont Queen Elizabeth) around 9:30 pm, hundreds of demonstrators marched peacefully down Blvd. René-Lévesque in protest of the Quebec government’s austerity plan. Suddenly, riot police ran past me and fired a barrage of tear gas canisters into an intersection crowded with demonstrators. I could have missed something, of course, but it seemed to me a strange response to a very peaceful protest, particularly given the recent news of a student protester in Quebec City who was hit in the face by a police tear gas canister. I asked a man standing nearby, who told me he lives here, what had happened; he advised me that the police sometimes need to respond when “protesters get too close to the buildings.” That seems like a rather high price to pay for walking near buildings. Austerity smells a lot like tear gas tonight in Montreal.
Scrubbing the future with Bill C-51
One of the key themes in the literature on law and policy in network or information societies is the idea that governments and private institutions are increasingly preoccupied with preventing unwanted behaviour. In addition to the traditional mix of after-the-fact prosecution through sanctions and deterrence of future transgression through the threat of sanction, the contemporary regime of governance, what Jack Balkin calls the “national security state,” adds technologies of prediction and prevention. As anticipated in many science fiction narratives, such as Minority Report, the target of regulation is gradually shifting from past behaviour to future behaviour, from behaviour that has already occurred to behaviour that may occur. While this trend isn’t new, it is becoming more pronounced in many areas of law and policy as digital and genetic techniques and the discourses surrounding them prop up the dream of perfect prediction and prevention.
This week, Public Safety Minister Steven Blaney is asking Canadians to believe that he has looked into (or “scrubbed,” to use the Minority Report term) the future and that it is marked by the rise of “radical jihadists.” Bill C-51 is packed full of preventative features, such as criminalizing the “promotion of terrorism,” which have alarmed the Privacy Commissioner and many academics looking into the proposed legislation. The idea here is that there is a causal relationship between speaking about hurting or killing people and actual violence. As Blaney puts it, “The Holocaust did not begin in the gas chamber; it began with words.” A crude analogy, even by the often low rhetorical standards of Blaney and company, but Blaney at least makes his point clear: by preventing speech that could be interpreted as promoting harm to Canadians, C-51 will prevent actual harm to Canadians. This is Minority Report-style governance at its finest.
The problem that Blaney faces now is persuading Canadians that the future really is full of radical jihadists – that this threat is so great that it is worth granting tremendous new powers to security and law enforcement organizations while reducing public oversight of those organizations. Three people died in “terror”-related attacks in Canada in 2014; while this is of course a tragic loss of life, the mobilization of new security measures in anticipation of a rising tide of jihadist terrorism in Canada likely seems ill-advised to many Canadians who know that there are far greater threats right now that aren’t being properly managed by the current government. Asbestos, for example. The federal government continues to support the asbestos industry and its products despite 368 asbestos-related deaths in Canada in 2013. Terror-related deaths would need to rise more than a hundredfold to even begin to compete with asbestos as a cause of death in Canada.
JS
Gmail and the engineering of choice
The secret of Google’s success isn’t so secret. As documented by countless news stories, including this documentary on CBC, the recipe for Google’s success includes the following:
Ingredient #1: Hot Sauce (the algorithm that ranks websites by links rather than by other measures of popularity; a rough sketch of link-based ranking appears after this list).
Ingredient #2: the “clean” and uncluttered look of the Google search website (in comparison to Google’s competition in the 1990s) as well as many of its other services, such as Gmail.
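To give a rough sense of what Ingredient #1 involves, here is a minimal sketch of link-based ranking in the spirit of PageRank. The toy graph, damping factor, and function name are my own illustrative choices, not details drawn from Google or from the CBC documentary.

```python
# A toy illustration of link-based ranking: a page's score depends on the
# scores of the pages that link to it. Graph and parameters are illustrative.

def rank_by_links(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to.
    Every linked page must also appear as a key."""
    pages = list(links)
    n = len(pages)
    scores = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_scores = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue  # dangling page: its score is simply not redistributed
            share = damping * scores[page] / len(outgoing)
            for target in outgoing:
                new_scores[target] += share
        scores = new_scores
    return scores

toy_web = {
    "a.com": ["b.com", "c.com"],
    "b.com": ["c.com"],
    "c.com": ["a.com"],
}

for page, score in sorted(rank_by_links(toy_web).items(), key=lambda x: -x[1]):
    print(page, round(score, 3))
```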
Google CEO Eric Schmidt likes to tell the story of how he learned that Google’s slogan — DON’T BE EVIL — can actually play an important role in decision-making within the company. The story goes that one day a number of high-ranking Google employees were discussing certain forms of advertising that could be incorporated into Google’s services. At one point, an engineer pounded his fist on the table and said, “That’s evil!” Schmidt says that this generated a discussion which led the group to decide that they shouldn’t make the change in advertising.
The details about what sort of changes in advertising Google was considering at this point aren’t a part of the official narrative. But it seems to me that, since then, Google has gradually waded into territory that at least one Google engineer considered to be “evil.”
Among those changes are the strange “categories” (or “tabs”) that keep popping up at the top of my Gmail inbox: Promotions and Social.
Today, I tried removing them from my Gmail app on my Android smartphone. I consider myself to be relatively good at these things, but there simply was no straightforward way of removing these “categories” from my inbox. I can go into Settings and unselect them, but there’s no “save” button. So if I unselect these categories and return to my inbox, voilà, the unselected categories are still there.
Fortunately, there are plenty of fixes to this problem, none of which seems straightforward.
My first thought as I began going through the steps to remove Promotions and Social was that Google seems to be moving away from Ingredient #2 in its recipe for success.
My second thought was about an article by Ian Kerr et al. about “engineered consent” and its many uses in government and industry to persuade people to get in line with the organization’s interests and objectives.
A few examples:
1. You go to the airport and you are given the “choice” of either a backscatter x-ray (i.e., body scanner) or a pat down (which would take longer and which involves a person touching your body in a way that would likely result in a memorable but awful experience). By not making a spectacle of yourself, holding up the line, and requesting the “traditional” pat down, you are “volunteering” and consenting to have your body virtually stripped of clothing and inspected by someone you can’t see.
2. You call your bank and are notified that your call may be recorded. By waiting to speak to someone, you are “volunteering” and “consenting” to have your call recorded.
In both examples (and one can think of countless others), there really isn’t much of a choice. “Consent” is acquired by making one of the two choices the only realistic option for most people seeking a particular goal (e.g., catching a flight; speaking to a human).
Drawing on the literature on decision-making, Kerr et al. argue that this type of engineering of choice is becoming widespread and is accompanied by a wide variety of justificatory discourses (public health, profit, national security, etc.). It is also based on some provocative research suggesting that people are not as rational as they often think they are. The “subjective value” of costs and benefits decreases the further in the future they occur. Moreover, “losses become less bad the further away they are in time, while gains become much less good.”
In other words, when confronted with the annoyance of Gmail Promotions and Social categories taking up a quarter of my inbox screen, the rational part of me will try to weigh the benefits against the costs of this annoyance. The costs might include things like giving away personal information and other privacy implications, the screen “real estate” that these chunky categories require, and my own desire to have some semblance of control over my inbox (not to be underestimated).
One important benefit is that if I decide to allow these categories to clutter my inbox, I don’t have to spend time figuring out how to get rid of them. That’s an immediate benefit (I don’t have to spend 5 – 10 minutes searching around for a fix), which, if I’m the rational actor that rational choice theory supposes I am, I weigh against the costs of keeping these annoyances at the top of my inbox.
The trouble is that these costs, particularly with regard to the privacy implications, are mostly unknown to me right now and probably won’t affect me immediately. Thus, I find myself weighing the immediate benefit against future costs — costs that I am liable (according to Kerr et al.) to perceive as “less bad” than they really are.
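To make that weighing concrete, here is a toy illustration of temporal discounting. The hyperbolic formula, the parameter k, and the made-up “cost” numbers are my own assumptions for illustration; they are not taken from Kerr et al.

```python
# Toy illustration of temporal discounting: a cost that arrives far in the
# future "feels" smaller than the same cost paid today. The hyperbolic form
# and the parameter k are illustrative assumptions, not Kerr et al.'s model.

def perceived_cost(cost, delay_days, k=0.05):
    """Hyperbolic discounting: subjective value = cost / (1 + k * delay)."""
    return cost / (1 + k * delay_days)

immediate_benefit = 10    # e.g., the ten minutes saved by not hunting for a fix
future_privacy_cost = 60  # a made-up, larger cost that is only felt much later

for delay in (0, 30, 365):
    felt = perceived_cost(future_privacy_cost, delay)
    verdict = "outweighs" if felt > immediate_benefit else "feels smaller than"
    print(f"delay={delay:>3} days  felt cost={felt:5.1f}  {verdict} the immediate benefit")
```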
So Google seems adept at engineering my choices. What this means is that Google is neither “good” nor “evil.” Google is utterly ordinary. Like thousands of other organizations around the world, Google is doing whatever it can to make control seem like choice.
JS
A week of conflict in the global struggle over copyright
January 18-26 was a very busy week (well, eight days) for those of us following copyright reforms around the world. In just eight days, there were at least three widely-publicized conflicts between copyright owners, Internet firms and copyright reform activists. Here are three piracy stories that caught the attention of most major news outlets around the world:
January 18: Large copyright owners were disappointed when a pair of proposed anti-piracy laws in the U.S. became the target of an online “blackout” protest by Google, Wikipedia and other websites. The House of Representatives’ Stop Online Piracy Act (SOPA) and the Senate’s Protect Intellectual Property Act (PIPA) were hailed by copyright owners as effective tools for, among other things, eliminating the threat of “rogue” websites based in foreign countries, which are allegedly responsible for flooding the web with pirated material. Google and other opponents of the legislation largely succeeded in framing the proposed legislation in terms of censorship, and U.S. politicians were soon clamoring for a chance to show the media and the public how opposed they are to the legislation. The protest was remarkably successful, leading both houses of Congress to postpone debate and discussion until the bills are amended to address the concerns raised by critics.
January 23: Copyright owners won a minor victory when New Zealand authorities arrested Kim Dotcom, the founder of cyber-locker MegaUpload. The arrest demonstrates that U.S. copyright owners appear to be able to mobilize police forces far beyond the United States. The arrest also seems designed to “send a message” that cyber-lockers or cloud storage sites are not immune to anti-piracy policing. This is also a test case for New Zealand’s new copyright legislation, which provides stronger protection for copyright by treating infringement as criminal activity. But it is a minor victory in the sense that there are many other similar sites which are still in operation and which will quickly fill the gap left by MegaUpload.
January 26: The Anti-Counterfeiting Trade Agreement (ACTA) — a proposed international agreement designed to clamp down on the global circulation of pirated and counterfeited goods — was met with opposition in Poland, where thousands took to the streets in protest. As Michael Geist notes, ACTA’s provisions for digital locks and its criminal sanctions for non-commercial infringement suggest that ACTA extends elements of the notorious U.S. Digital Millennium Copyright Act (DMCA) to the international level.
What does this series of events suggest about the ongoing struggle over copyright reform?
For many years, copyright owners have lobbied governments around the world for national legislation and international agreements which suit the interests of owners, and these efforts were extremely productive in the 1990s. Key 1990s international agreements such as the WTO Trade-Related Aspects of Intellectual Property Rights (TRIPS) agreement and the WIPO Copyright Treaty, as well as national legislation such as the DMCA, all catered to the interests of copyright owners in “stronger” protection of intellectual property.
The online and offline protests, as well as the considerable news coverage devoted to them, suggest that copyright owners are finding it difficult to dominate lobbying and public debate about copyright. Scholars, activists and journalists can take some of the credit for raising public awareness of what is actually at stake in this formerly obscure area of law. But the “game-changer” appears to be the rapid expansion of Internet firms like Google, and their ability and willingness to use the many means at their disposal to shift public opinion on copyright reform.
In my view, the delay of SOPA and PIPA is largely the result of Internet firms’ recognition of their shared economic interests in distancing themselves from overly-protective copyright regimes. In this context, copyright owners needed a small fish to fry, and MegaUpload (for which there are many legitimate uses and users) appeared to fit the bill.
John Shiga
Goodbye Big Four, Hello Big Three
Yesterday, Universal Music Group won one of the largest items ever to be auctioned in the music business: EMI’s recorded music assets, including works by the Beatles, the Beach Boys, Coldplay, Pink Floyd, Radiohead, and most of the Motown catalogue. The price tag: $1.9 billion USD.
Meanwhile, a Sony-led consortium purchased EMI’s music publishing division for $2.2 billion USD.
Regulatory approval of the two deals may take another year, and could lead to significant changes in the way that EMI is being split up and sold off.
But Universal is already busy managing public perceptions of its latest acquisition. Mick Jagger was glad that Universal, which he described as “people who really do have music in their blood,” now controls much of the Rolling Stones’ catalogue. And according to Coldplay manager Dave Holmes, “This can only be positive for the artists and executives at EMI.” These comments suggest that, compared with EMI’s previous owners (Terra Firma, a private equity group, and Citigroup), Universal will provide a more musician-friendly environment.
So far, news reports aren’t giving a very clear picture of possible negative consequences of these deals. The only criticism appearing in the coverage at this point – that the deals are “sad” for British music and culture – seems to miss the broader implications. According to former EMI director, Brian Southall, “It is very sad that the whole of EMI’s recorded music division has gone to Universal. There are no British record companies left to buy EMI.”
Notably absent in the news coverage thus far is any sense of how the deal sets the stage for an unprecedented concentration of ownership in the music industry.
Scholars, music critics, musicians and fans have long decried concentrated ownership in the music business. Until the 1990s, the music industry was largely controlled by the Big Six: BMG, EMI, PolyGram, Sony, Universal and WEA. In the 1990s, Seagram (the Canada-based liquor company) purchased Universal and PolyGram and the Big Six became the Big Five. In 2004, Sony Music merged with BMG to become Sony BMG, leaving control of the music industry in the hands of the Big Four.
This week, we’ve seen another step towards what might be described as a virtual monopoly. EMI’s absorption into Universal and Sony puts much of the global music industry in the control of the Big Three. If approved by regulators, three corporations – Sony, Universal and Warner – will control approximately 80% of the music industry worldwide. (The numbers vary from country to country. In Canada, the combined market share of Universal, Sony, Warner and EMI in 2010 was 80.38%, according to a Nielsen/Billboard report.)
Perhaps in times of recession, consolidations on this scale seem unremarkable. Perhaps extremely concentrated economic power in media and culture has come to seem normal. It will be interesting to see how this story is covered in the next few days, but so far, nothing about this situation appears to be particularly worrying for commentators and reporters.
John Shiga
Post-Riot Gear: Social Media and the Rise of Peer-to-Peer Policing
Most news stories about law and digital media fit one of the two following narrative frames: (1) digital media are empowering people to overcome legal constraints; or, (2) digital media are becoming tools of social control. What I find interesting about the news coverage of the Vancouver riots is that it doesn’t fit neatly into either of these narrative frames. Instead, the story has turned into a public debate about social media as tools of control, which are themselves out of control.
When tools of control spin out of control
A few journalists have entertained the possibility that, instead of fearing public shaming via social media, the desire for a few seconds of Internet fame may have motivated some revelers to riot. As Margaret Wente wrote in her column last week,
Mark Leiren-Young, a writer for The Tyee, has a different view. He thinks this riot was a new phenomenon, one in which the presence of the social media actually egged on the rioters. After all, if you can’t be famous on TV, then at least you can be famous on Twitter and Facebook, even at the cost of possible arrest.
It’s a long shot, but it’s possible that social media (or the desire for social media notoriety) encouraged some rioters to believe that smashing windows, burning cars, etc. would translate into online fame. None of the news outlets I’ve been following have suggested that social media facilitated the planning of the riots.
The story thus far does not fit the “digital media are empowering people to overcome legal constraints” frame.
What has emerged instead is a story about digital media as instruments of control. But with a twist ending.
The Vancouver police have had no trouble enlisting citizens in the identification of rioters on the police department’s Facebook page. The department did well to update the page fairly quickly with a thread devoted to citizens’ photos of the rioters. However, the police quickly realized (as have some legal scholars) that the feeling of empowerment associated with social media, combined with the demonization of the rioters in the national news media, created the perfect conditions for online vigilantism.
The Vancouver Police Facebook page seems unable to contain vigilante energy within official channels. That energy has instead spilled over into peer-to-peer policing — the use of social media by individuals to act out a fantasy of perfect justice where the wrongdoers (and only the wrongdoers) get exactly what they deserve.
One of the most viewed news stories on the CBC website today is a commentary by lawyer Daniel Henry on social media as an instrument of law enforcement in the Vancouver riot aftermath. Henry points out that there is a significant difference between labeling someone in a post as “criminal” and describing an act with other, broader, more subjective adjectives; the person being accused of a crime can in turn sue for defamation if the photos, videos or other evidence do not support the poster’s accusations.
Henry points out that “It’s easy to get caught up in the frenzy of getting the bad guys.” In my view, this shift from a concern about riots to a concern about “outing” the rioters is where the story gets interesting. The “mob” mentality which Henry and other commentators worry about is online, not in the streets.
But for such commentators, social media have no impact on the communication of what we might call “shame messaging.” Social media are neutral tools which convey sentiments but which do not provoke or shape those sentiments or their communication.
It’s time to consider another possibility. The bigger picture in this case might be that social media, especially their photo-sharing capabilities, provide users with a seemingly immediate, direct and appropriate response to this particular public threat. In other words, users do not participate in this kind of photo-shaming simply to help the legal system do its job. There is much less deference toward the law among the photo-sharers than police might have expected. Photo-sharing allows alleged rioters to be punished right now. The “swift justice” of online shaming is possible precisely because it is able to operate outside of the legal process. Among the problems raised by the “outing” of rioters online is that social media favour speed over all other considerations, including proportionality in the administration of justice.
Why do people use social media to “out” other people — and why do they do it with such gusto?
It’s plausible that social media vigilantism is the predictable response of an enraged public accustomed to Facebook and Twitter as a mode of self-expression and engagement with public issues. But it’s also important to consider the possibility that there is more to this flurry of accusatory photo-sharing than a sense of civic duty.
Sarah Kember, in her 1998 book, Virtual Anxiety, draws on a long line of theorists who have attempted to explain the cultural obsession with photographic images. She argues that photographs are highly fetishized objects, and that the proliferation of imaging technologies in the legal system and other modes of social control is a manifestation of this fetishization of photographs as truth. Photographs are often treated as “tokens” or “trophies” of those whom they depict. In the case of individuals or groups who are regarded as threats to social order, photographic images can give the photographer and the viewer the sense that they know the truth of the photographic subject (e.g., isolating and naming the individual in the crowd), which in turn provides a sense of control over the threatening person or group.
From a legal perspective, the use of images of the rioters on Facebook, blogs and other social media should be supported in so far as users are “facilitating justice,” as Daniel Henry puts it, “not working on substitute remedies.” But the point Kember is making is that photographs often perform a “substitute remedy” because the act of photographing is bound up with a desire to control the subject depicted. Photography renders the fleeting (and in this case, fleeing) subject in a form that seems durable, discrete, and fixed. The threat is reduced at least temporarily because the threatening person becomes an object which cannot look back at the viewer.
Vigilante uses of images and videos are an extreme (perhaps exaggerated) form of this urge to “capture” threatening subjects in photographic fetishism. Vigilante uses of images and videos in social media stem as well from the tremendous truth-value that law invests in photographic images.
The problem, as Kember points out, is that “Fetishism is always an inadequate and unstable means of control precisely because it is a compensatory mechanism” (p. 6). In other words, the more the act of photographing and displaying photographic images is invested with an almost magical ability to capture, punish and restore order (what Henry calls “substitute remedies”), the more that those images become reminders of what is actually lost. Henry, for instance, sees a very real possibility that the vigilante “frenzy” will further undermine “civilized democracy.”
In this way the technologies of control — in this case social media and digital images — can become intertwined with the threats which they are supposed to contain and neutralize. Digital cameras and social media — the new post-riot gear — are thus imagined to be as much of a threat to law and order as the riot itself.
John Shiga
Virtual Production, Motion Capture and the Return of the Film Auteur
Everything that was once considered to be integral to film production – sets, make-up, costumes, cameras, etc. – is called “physical production” in the digital effects industry. Many blockbuster films, including Avatar (2009), the highest-grossing film in history, are now produced with little or no physical production. These films are the result of what is called “virtual production,” and this development is transforming notions of creativity and creative labour in the film industry.
Virtual production techniques are supposed to reduce costs and make film production more efficient. But at the moment, virtual production means adding more people and technology (and a wider diversity of technologies and specialists who know how to use them). Virtual production expands the scale of film production, and increases the complexity of the industrial apparatus that is commercial filmmaking.
Virtual production is thus an unlikely place for the return of the film auteur — or the notion of the film as a work of individual authorship — but it is here, in the convergence of cinema and 3D imaging, where the auteur is making its 21st century comeback.
Earlier this week, at the Interacting With Immersive Worlds Conference at Brock University, I had the opportunity to hear about state-of-the-art virtual production from digital effects artist Dejan Momcilovic, who spoke about his work at the New Zealand-based effects firm WETA Digital. Momcilovic spoke about motion capture, a technique used to create the digital effects in Avatar, The Lord of the Rings, and King Kong, among many other well-known films.
Motion capture is the process of measuring and recording movement in a form that can be processed by computers. Whereas cinematic techniques record the appearance of movement, motion capture extracts information about movement. It’s a method of analyzing movement, which is why, for a long time, motion capture was associated more with rehabilitation medicine than with entertainment.
The video game industry changed all that. After motion capture was used extensively and successfully in action and sports games (Rockstar’s L.A. Noire is probably the most well-known recent example of motion capture in video game production), the film industry began experimenting with it too.
Even if you’ve never heard of motion capture, you’ve probably seen “making of” videos or stills in news articles of actors with dots (or “markers,” as they’re called in the industry) placed on their faces. Several cameras (or several dozen, in WETA’s top-of-the-line setup) record the movement of the markers. More detailed and subtle movements require more markers, so typically, the majority of the markers are placed on the actor’s face.
Once motion capture has been used to record a performance in a scene, effects artists use the motion data to make computer-generated bodies move. Bodies that are entirely computer-generated thus have the appearance of live-action filming.
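For a rough sense of what “extracting information about movement” looks like in practice, the sketch below computes a single joint angle per frame from three tracked marker positions — the kind of value that could then be used to drive a computer-generated limb. The marker names, coordinates, and simplified math are my own stand-ins, not WETA’s pipeline.

```python
import math

# Toy illustration of turning captured marker positions into motion data that
# can drive a computer-generated body. The marker names, frames and "rig" are
# illustrative stand-ins, not an actual motion capture pipeline.

def joint_angle(parent, joint, child):
    """Angle (in degrees) at `joint` formed by the parent and child markers."""
    v1 = [p - j for p, j in zip(parent, joint)]
    v2 = [c - j for c, j in zip(child, joint)]
    dot = sum(a * b for a, b in zip(v1, v2))
    norms = math.sqrt(sum(a * a for a in v1)) * math.sqrt(sum(b * b for b in v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norms))))

# Captured marker positions (x, y, z) for a few frames: shoulder, elbow, wrist.
frames = [
    {"shoulder": (0.0, 1.5, 0.0), "elbow": (0.3, 1.2, 0.0), "wrist": (0.6, 1.2, 0.0)},
    {"shoulder": (0.0, 1.5, 0.0), "elbow": (0.3, 1.2, 0.0), "wrist": (0.5, 1.4, 0.0)},
]

for i, markers in enumerate(frames):
    angle = joint_angle(markers["shoulder"], markers["elbow"], markers["wrist"])
    # In a real pipeline, this value would be retargeted frame by frame onto
    # the rig of a computer-generated character.
    print(f"frame {i}: elbow angle = {angle:.1f} degrees")
```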
Integrated with other digital effects techniques, motion capture allows directors to film other-worldly characters and environments “as if” they were actually in front of the camera. In virtual production, the characters and the environment of the film can be rendered first and then, using a virtual camera, the director can “film” the scene.
Criticism of motion capture has come mainly from traditional animators who were highly suspicious of this new technique of bridging live-action and animation. As Maureen Furniss writes in her excellent overview of these debates, traditional animators saw motion capture as a shortcut around animation production work.
Judging from Momcilovic’s calm and collected manner of speaking about WETA’s digital capture work, today’s motion capture experts no longer see themselves as struggling to meet the artistic standards set by traditional animation.
So it seems like a pretty happy story. Motion capture gives digital effects artists another creative tool. And digital effects artists open up new ways of making film for actors, writers, directors and so on, unconstrained by the technical and aesthetic limitations of traditional cinema. Everyone wins. Or so it seems.
As I watched actor after actor being scanned, transformed into code and then inserted into virtual worlds, I couldn’t help but think of the 1981 film, Looker.
Looker was one of the first Hollywood films to seize upon worries about the replacement of actors by computer simulations. As is common with science-fiction thrillers, Looker revolves around a devious corporation, Digital Matrix, Inc., which, in this case, manipulates television audiences with computer-generated television advertisements. After scanning the actor’s body and producing a 3D model, Digital Matrix programs the virtual actor’s movements so as to hypnotize the viewer (the film doesn’t explain why programmed motion would hypnotize people). Since the real actors might cause trouble for the corporation, Digital Matrix kills the actors/models after digitizing them.
Although Looker is an all-too-familiar tale about the power of media to manipulate the weak and vulnerable audience, it does provide an early commentary on the way 3D capture and digital rendering techniques encourage a view of actors’ bodies and actions as “raw material.”
As with most media impact narratives, which focus on the consequences of technology on society and culture, Looker oversimplifies the relationships between digital media, cinema, and the broader history of techniques for recording movement. Histories of cinema also tend to oversimplify the technological and cultural history of cinema. While the techniques of documenting motion and producing the illusion of motion are at least as old as chronophotography (the horse locomotion experiment that Eadweard Muybridge conducted for Leland Stanford is probably the most well-known example), cinema was not the inevitable outcome of photography. Animation techniques, for instance, could have led to a very different mode of producing film along with very different cultural forms of cinema.
But chronophotography comes out of the same obsession with breaking motion down to its basic, mathematical description that motion capture takes up more than a century later. Marey’s delight in shifting between media, as demonstrated in his Analysis of the Flight of a Seagull (1887) sculpture, also suggests that the notion of motion as something that can be extracted from one body and transferred into another has been around much longer than contemporary motion capture techniques.
One of the key differences between motion capture and traditional cinematic techniques is that motion capture seeks to render 3D models based on “deep” (rather than “surface-level”) scans of bodies in motion. Digital effects firms like WETA produce 3D models with physiological and anatomical depth, scanning not just the surface of the body but also the body’s “interior,” including muscles, tendons, and bones.
As Momcilovic noted in his talk, some of the actors in WETA’s productions have undergone MRI scanning to make their 3D renderings of movement as realistic as possible. Although MRI may not be a standard component of motion capture yet, it does suggest that the overall trajectory in this area of visual effects is toward “deep” scanning of bodies.
Information from MRI and other kinds of scanning allows the production of virtual models of the actor which perform gestures and movements and respond to conditions and events (e.g., impacts) much as the actor’s biological body would. Motion capture today effectively enables the bio-mechanical reconstruction of the actor’s body in digital code.
Looker might have missed the manner in which digital techniques would be integrated into acting and film production more generally, but it does anticipate changes in creative labour facilitated by 3D motion capture and other virtual production techniques.
There are at least two ways of looking at the impact of virtual production on creative labour.
On the one hand, labour that was previously considered “merely” technical is now considered to be creative. Visual effects artists like Momcilovic are increasingly recognized as creative contributors to the film’s aesthetic quality (as well as to its marketability).
On the other hand, motion capture reinforces long-standing hierarchies of labour in the film industry. Momcilovic noted that virtual production gives directors like James Cameron unprecedented control over the entire fictional world of the film. All Cameron needs to do is say, “make that hill bigger,” and the designers make it happen by altering the virtual landscape.
In this way, virtual production, as innovative as it may be, reinforces a very traditional sense of the film as the artistic expression of its director, reminiscent of auteur theory. Even effects artists sometimes regard the film they are working on as the director’s work of art. As Momcilovic put it, “Every decision on Avatar would at least go by [Cameron] and he’d have something to say about it … In a way, it was like watching a genius do what he does.”
Virtual production, as it is currently practiced by filmmakers and interpreted by critics and at least some effects artists, bolsters the sense (however distorted it may be) that films like Avatar spring forth from the inspired mind of the director. The actors’ contributions, and those of the effects artists, are thus placed in the background.
Rather than “replacement by computers,” as predicted by Looker, motion capture and the broader array of techniques that enable the virtualization of production are bringing about some very real shifts in creative labour and in the way audiences interpret films produced in this way. Unfortunately, in the short term at least, these shifts appear to be backward, to the 1950s to be precise, when auteur theory, or the notion that the film is the personal expression of its director, became popular among film scholars and critics.
How can the return of the film auteur in the context of an increasingly computerized mode of production be explained?
One possibility is to consider the way digital techniques are linked to other artistic practices. It may be, as Lev Manovich suggests in his 2001 book, The Language of New Media, that digital cinema is more like painting than photography. As Manovich writes,
“The manual construction of images in digital cinema represents a return to the pro-cinematic practices of the nineteenth century, when images were hand-painted and hand animated. At the turn of the twentieth century, cinema was to delegate these manual techniques to animation and define itself as a recording medium. As cinema enters the digital age, these techniques are again becoming commonplace in the filmmaking process. Consequently, cinema can no longer be clearly distinguished from animation. It is no longer an indexical media technology but, rather, a subgenre of painting.” (p. 295)
What Manovich didn’t anticipate was that techniques like motion capture would be used to bridge animation and live-action. Nevertheless, his suggestion that digital techniques allow filmmakers to manually construct (or program) every detail of the cinematic image, much like a painting, provides a partial explanation for the appeal of the auteur theory in this context. Motion capture is industrially organized and represented as a way for the director to have more control over each “brush stroke” of the cinematic “painting.”
As film production becomes increasingly complex, notions of cinema-as-art may become more simplistic. While the notion of the director as the lone genius toiling away to make the magic happen seems hopelessly out of date, it is precisely that idea which seems to be gaining currency among those who are closest to the action in virtual production.
John Shiga
Blogs: A Self-Motivating Medium?
After having a few years to think about blogging, I’m finally ready to give it a try. But as a researcher and instructor in communication studies, I’m writing a lot anyway. Why would I add this blog to the list of things to write, especially since, according to a Pew study conducted last year, the perceived importance of blogs relative to other media is declining, at least among younger Internet users? So, why blog?
One might find inspiration in the idea that, as a form of social media, blogs are contributing to the transformation of media production, especially news production. For some writers, like Clay Shirky, bloggers (along with micro-bloggers and Facebook users) are a key part of the vanguard of the social media revolution.
And Shirky has many examples of blogging and other forms of digital content creation which have had a significant impact on politics and the news media.
For proponents of this view, which could be called “social media optimism,” social media have a democratizing effect on societies, and it’s clear that blogs are helping to open up journalism and other types of media production to “everybody” (i.e., non-professionals).
Social media optimists often point to the Memogate scandal in 2004 as an example of the way blogging can effectively challenge mainstream news media in the U.S. There is no shortage of more recent examples of bloggers who have successfully used blogs to challenge the official media in authoritarian regimes (and there are many such regimes around the world). This includes the Iranian-Canadian blogger, Hossein Derakhshan, as well as Hamza Shargabi in Yemen.
In all of these cases, blogging was an important source of information about political conflict and worked as a catalyst for organized political action.
For those who believe that (1) social media are inherently more democratic than mass media and that (2) a more democratic media system will inevitably lead to a more democratic society, these blog success stories seem to be early indications of a major historical shift in media and politics.
But according to other media theorists, the impact of blogging on politics and the mainstream media tends to be vastly overestimated. Jodi Dean, for example, argues in her 2010 book, Blog Theory, that blogging is a part of a fantasy of “communicative capitalism” in which bloggers believe that “circulating messages” (posting, sharing, etc.) is an effective form of political participation. Political and economic elites are happy to let bloggers think along those lines because blogging diverts dissent safely away into cyberspace.
It’s a rather somber view. But after a decade of blog-topian rhetoric, Dean is correct to point out that it’s time to critically evaluate what blogs actually do in particular political, social and cultural conditions rather than assuming they are agents of democratization.
Setting aside the “big picture” issues of blogging for a moment, there are perhaps some less revolutionary but nonetheless important motivations for blogging.
Most bloggers can probably answer the question, “Why blog?” without hesitation. They do it because they enjoy it.
For cognitive psychologists, enjoyment is an “intrinsic motivation.” An intrinsic motivation comes from the activity itself. If I enjoy blogging because it gives me some degree of pleasure or gratification, then my blogging is intrinsically motivated. The engine for action comes from the activity itself. In this case, blogging is like a locomotive which propels itself and the blogger.
Extrinsic motivations come from outside the activity. These motivations may refer to outcomes of the activity, but not to the activity itself. Extrinsic motivations include obligations and responsibilities as well as the desire for money, recognition and other kinds of reward which motivate activity. The engine for action comes from other people or things. In this case, blogging rolls along, but it is being pushed (or pulled) by someone/something else. The blog and the blogger become passenger or cargo cars.
Does this give me a better sense of why I am blogging?
A bit.
Many actions have both intrinsic and extrinsic motivations, so it’s no surprise that I am writing this post because I enjoy it (so far) AND because there’s a possibility (even if it’s remote) that I will eventually get some feedback which will reinforce the value I perceive in blogging and act as an extrinsic motivation to keep me writing and posting.
The intrinsic/extrinsic dualism seems to be cognitive psychology’s way of talking about the individual and the social levels of action. If that’s true, then there’s plenty of overlap between the two categories. How do you categorize the pleasure of writing? Is that pleasure purely “intrinsic,” or is the pleasure of writing (and the desire to write) shaped by social norms about the value of certain kinds of writing over other cultural practices?
What I’m suggesting is that the intrinsic/extrinsic dualism can gloss over the social origins of desires and gratifications. That’s a pretty significant downside to these categories. But with some tweaking, they can still be useful for understanding why bloggers blog.
In the course of this short inquiry into the raison d’être of my blog, I came across an interesting study of bloggers’ motivations. The study builds on the intrinsic/extrinsic categories but goes further by outlining how motivations change over time.
In a 2010 article in New Media & Society, Brian Ekdale, Kang Namkoong, Timothy K.F. Fung and David D. Perlmutter examined the reasons why political bloggers blog. They built on the intrinsic/extrinsic categories of motivation, but developed 13 motivations that are specific to political blogging (see below). They then asked 154 of the top political bloggers to rate how much influence each motivation had on their blogging, using a 0 (not at all) to 10 (very much) scale. 66 of the bloggers responded and completed the survey.
Extrinsic motivations
- To provide an alternative perspective to the mainstream media
- To help society
- To inform people about the most relevant information on topics of interest
- To influence public opinion
- To help your political party or cause
- To influence mainstream media
- To serve as a political watchdog
- To inform people about the most recent information on topics of interest
- To critique mainstream media
- To critique your political opponents
Intrinsic motivations
- To formulate new ideas
- To keep track of your thoughts
- To let off steam
The researchers were interested in how the motivations for blogging change over the course of the “blogspan,” that is, the lifespan of a blog. So they asked the bloggers to give each motivation a rating for the influence it had on their initial blogging and on their current blogging.
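As a rough sketch of that kind of comparison, one might compute the change between the mean “initial” and mean “current” rating for each motivation. The numbers below are invented for illustration; they are not the Ekdale et al. data.

```python
# Toy sketch of comparing "initial" vs. "current" influence ratings (0-10)
# per motivation. The ratings below are invented, not the Ekdale et al. data.
from statistics import mean

ratings = {
    # motivation: ([initial ratings], [current ratings])
    "To influence mainstream media": ([3, 4, 2, 5], [7, 8, 6, 7]),
    "To let off steam":              ([8, 7, 9, 6], [5, 4, 6, 5]),
}

for motivation, (initial, current) in ratings.items():
    change = mean(current) - mean(initial)
    print(f"{motivation}: {mean(initial):.1f} -> {mean(current):.1f} (change {change:+.1f})")
```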
The results of the survey are a bit surprising. The researchers expected extrinsic motivations to become stronger over time, and I can understand why. One might assume that the more time, energy and money invested into a blog, the more one will be motivated to maintain reputations, generate income, keep one’s career going, etc. It might also be assumed that such extrinsic motivations would become stronger over time particularly for writers who have made careers out of blogging, like the top political bloggers in the NM&S study.
Interestingly, the results suggest a different picture of blog motivation. All motivations (both “intrinsic” and “extrinsic”) for blogging increase in influence during the “blogspan.”
(There is one exception to this rule. What the researchers called the “Let off steam” motivation decreased over time. The more experience the bloggers acquired, the less “letting off steam” was the key motivation for blogging.)
Another interesting finding in this study is that the most significant increase in ratings between initial and current blogging occurred in the two categories of motivations related to the mainstream media:
“To influence mainstream media”
“To critique mainstream media.”
Over time, the desire to influence and critique the mainstream media became increasingly strong motivations to continue blogging. These political bloggers began with a fairly jaded attitude about the potential of blogging to affect mainstream media. One blogger told the researchers that, early on, it seemed to him/her that most political blogs were “vanity projects” dressed up as challenges to the media and political systems.
Gradually, the more that they worked on their blogs and paid attention to other blogs, the more the bloggers viewed blogging as a way of contributing to alternative ways of thinking about issues in the news media and drawing attention to events and issues that are left out of mainstream news media.
The study was done on the “top bloggers.” These are bloggers who have successfully established a reputation and perhaps even a career in the political blogosphere. So, the findings of the study – that blogging increases motivation to blog over time – probably shouldn’t be generalized to all blogs or all bloggers.
But it’s short-term motivation for this post at least.
John Shiga