The secret of Google’s success isn’t so secret. As documented by countless news stories, including this documentary on CBC, the recipe for Google’s success includes the following:
Ingredient #1: Hot Sauce (the algorithm that ranks websites by links rather than by other measures of popularity).
Ingredient #2: the “clean” and uncluttered look of the Google search website (in comparison to Google’s competition in the 1990s) as well as many of its other services, such as Gmail.
Google CEO Eric Schmidt likes to tell the story of how he learned that Google’s slogan — DON’T BE EVIL — can actually play an important role in decision-making within the company. The story goes that one day a number of high-ranking Google employees were discussing certain forms of advertising that could be incorporated into Google’s services. At one point, an engineer pounded his fist on the table and said, “That’s evil!” Schmidt says that this generated a discussion that led the group to decide against making the change in advertising.
The details about what sort of changes in advertising Google was considering at this point aren’t a part of the official narrative. But it seems to me that, since then, Google has gradually waded into territory that at least one Google engineer considered to be “evil.”
Among those changes are the strange “categories” (or “tabs”) that keep popping up at the top of my Gmail inbox: Promotions and Social.
Today, I tried to remove them from my Gmail app on my Android smartphone. I consider myself to be relatively good at these things, but there simply was no straightforward way of removing these “categories” from my inbox. I can go into Settings and unselect them, but there’s no “save” button. So if I unselect these categories and return to my inbox, voilà, the unselected categories are still there.
Fortunately, there are plenty of fixes for this problem, though none of them seems straightforward.
My first thought as I began going through the steps to remove Promotions and Social was that Google seems to be moving away from Ingredient #2 in its recipe for success.
My second thought was about an article by Ian Kerr et al. about “engineered consent” and its many uses in government and industry to persuade people to get in line with the organization’s interests and objectives.
A few examples:
1. You go to the airport and you are given the “choice” of either a backscatter x-ray (i.e., body scanner) or a pat down (which would take longer and which involves a person touching your body in a way that would likely result in a memorable but awful experience). By not making a spectacle of yourself, holding up the line, and requesting the “traditional” pat down, you are “volunteering” and consenting to have your body virtually stripped of clothing and inspected by someone you can’t see.
2. You call your bank and are notified that your call may be recorded. By waiting to speak to someone, you are “volunteering” and “consenting” to have your call recorded.
In both examples (and one can think of countless others), there really isn’t much of a choice. “Consent” is acquired by making one of the two choices the only realistic option for most people seeking a particular goal (e.g., catching a flight; speaking to a human).
Drawing on the literature on decision-making, Kerr et al. argue that this type of engineering of choice is becoming widespread and is accompanied by a wide variety of justificatory discourses (public health, profit, national security, etc.). Their argument also draws on some provocative research suggesting that people are not as rational as they often think they are. The “subjective value” of costs and benefits decreases the further they occur in the future. Moreover, “losses become less bad the further away they are in time, while gains become much less good.”
In other words, when confronted with the annoyance of Gmail Promotions and Social categories taking up a quarter of my inbox screen, the rational part of me will try to weigh the benefits against the costs of this annoyance. The costs might include things like giving away personal information and other privacy implications, the screen “real estate” that these chunky categories require, and my own desire to have some semblance of control over my inbox (not to be underestimated).
One important benefit is that if I decide to allow these categories to clutter my inbox, I don’t have to spend time figuring out how to get rid of them. That’s an immediate benefit (I don’t have to spend 5 – 10 minutes searching around for a fix), which, if I’m the rational actor that rational choice theory supposes I am, I weigh against the costs of keeping these annoyances at the top of my inbox.
The trouble is that these costs, particularly with regard to the privacy implications, are mostly unknown to me right now and probably won’t affect me immediately. Thus, I find myself weighing the immediate benefit against future costs — costs that I am liable (according to Kerr et al.) to perceive as “less bad” than they really are.
So Google seems adept at engineering my choices. What this means is that Google is neither “good” nor “evil.” Google is utterly ordinary. Like thousands of other organizations around the world, Google is doing whatever it can to make control seem like choice.