
20130617

The Right to Be Forgotten

The recent revelations of a set of massive and longstanding NSA surveillance programs have prompted a blizzard of accusations, defenses, and recriminations from across the political spectrum. PRISM and related programs have been called everything from “all the infrastructure a tyrant could need” to a vital component of national security, while leaker Edward Snowden has been called everything from a hero to a traitor. A week later, the pieces have yet to settle. But what’s been bothering me in all this is the confusion of ideas about state power, civil liberties, surveillance, computer security, and privacy. It bothers me because without clear ideas we cannot have clear policies, and without clear policies liberty, security, and any other desired public good are achieved only by accident.

PRISM is not, strictly speaking, surveillance. It looks like surveillance, it feels like surveillance, but it lacks the main purpose of surveillance: creating a disciplinary power relationship. When scholars talk about surveillance in a rigorous sense, they’re mostly talking about Foucault’s theory of the Panopticon. The original Panopticon was a plan for a prison, with the cells arranged so that a single guard could watch all the prisoners and dispatch punishments and rewards as appropriate to each prisoner’s behavior. Eventually, according to Foucault, the desires of the warden would be internalized by the prisoners, and they would behave as planned. They would be disciplined.

Foucault’s genius was noting that the architecture of the Panopticon allowed the mechanisms of power to operate at very little cost, because prisoners could not tell when they were under observation, and so would always have to behave as if they were being watched. Additionally, panopticonic structures were everywhere: in classrooms, hospitals, urban renewal of medieval districts into broad boulevards, even the bureaucratic organization of the modern state into administrative districts and statistical agencies, to the point that a scholar describing yet another panopticon is met with a sigh and shrug.

As Whitney Boesel of Cyborgology noted, when we look for a disciplinary purpose in these NSA programs, we find nothing. Despite editorials in the New York Times and on ThinkProgress with Foucault 101 explanations of the panopticon, and gestures towards the chilling effect and future potential harms, it’s difficult to point to any specific thought or speech act that someone did not have as a consequence of the potential that they might be added to an NSA database. People appear to be totally free to say and think whatever they want online, including espousing flatly anti-democratic opinions across the political spectrum. These programs are no more panopticonic than 20th-century statecraft in general.

Privacy has a totemic value in American political discourse, but privacy as a concept is fuzzy at best. Philosophically, my colleague Jathan Sadowski describes privacy as “that which allows authentic personal growth,” a kind of antithesis to the disciplining and shaping of the panopticon. Legally, American privacy originates in a penumbra of rights defined in the 4th Amendment (protection from arbitrary search and seizure), 9th Amendment (other, unspecified rights), and 14th Amendment (right to due process). Privacy has further become established as part of the justification for reproductive freedom in Griswold v Connecticut and Roe v Wade, loading it with all the baggage of the culture war.

But what is privacy, really? Future Supreme Court Justice Louis Brandeis, in an influential 1890 essay, described it as “the right to be let alone.” Brandeis’s essay was published in the context of an intrusive popular press using the then-new technology of instant photography to violate the privacy of New York society members. Brandeis extended the basic right of a person to limit the expression of their “thoughts, sentiments, and emotions” into a fundamental divide of public and private spaces. Since then, mainstream legal thought has attempted to apply Brandeis’s theory of privacy to new technology and new concerns, with varying degrees of success.

Brandeis’s metaphor breaks down in the face of Big Data because Brandeis was concerned with the gradations of privacy in space (it is acceptable to be photographed on the red carpet at a premiere, unacceptable on your doorstep, totally illegal inside your home), and computers and data are profoundly non-spatial. There is no “Cyberspace.” That’s an idea cribbed from a science-fiction book written by a man who’d never seen a computer. Spatial metaphors fundamentally fail to capture what computers are doing. Computers are, mathematically speaking, devices that turn numbers into other numbers according to certain rules. These days, we use computers for lots of things: science, entertainment, but mostly accounting and communication. And for the latter two uses, the phrase “my personal data” (which inspires so much angst) confuses “personal” to mean both “about a person” and “belonging to a person.”

Advocates of strict privacy control tend to confuse the two. Privacy is contextual, social, and protean, so I’d like to analyze something concrete instead: secrets. A secret is something that a person or a small group knows and does not want other people to know. Most “personal data” is actually part of a transaction, whether you’re buying a stick of gum at the gas station or looking at pictures stored on a remote server. We’re free to keep records of our side of the transaction, yet we’re outraged when the other side keeps records as well. We could ask the other side of the transaction to delete the records, or not to share them, but at its strongest this is a normative approach. There’s no force behind it.

Moving from the normative ‘ought’ to the descriptive ‘is’ requires a technological fix. Physical privacy is important, but walls and screens are far more sure than the averted gaze. It’s wrong to steal, and valuable things are locked up anyway. The digital equivalent of walls and locks is cryptography: math that makes it difficult to access a file or a computer system. Modern crypto is, technically speaking, very, very good. A cipher like AES-256 is unbreakable within the lifespan of the universe, assuming it’s correctly used. The problem with cryptography is that it’s very rarely used according to the directions. People use and reuse weak passwords, they leave themselves logged in on laptops which get left in taxis, or they plug in corrupted USB keys, compromising entire networks.
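To make “correctly used” concrete, here is a minimal sketch of symmetric encryption done by the directions, in Python with the third-party cryptography package (my choice of library for illustration, not one the essay names):

    # A minimal sketch of encryption done "by the directions", using the
    # third-party `cryptography` package (pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # random key; guard it like a house key
    f = Fernet(key)

    token = f.encrypt(b"a real secret")  # authenticated encryption (AES plus HMAC)
    print(f.decrypt(token))              # b'a real secret'

The math in those few lines is effectively unbreakable; the weak points are everything around them: where the key lives, who can read it, and whether the laptop holding it gets left in a taxi.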

There is a very real chance that there is no such thing as digital privacy or security; that Stewart Brand’s slogan that “information wants to be free” is true in the same way that “nature abhors a vacuum.” The basic architectures of computing, x86 and TCP/IP, are decades old and inherently insecure in the way they execute code and route traffic. Cloud services are even worse. We as users don’t own those servers; we don’t even rent them. We borrow them. Google and Facebook aren’t letting us use their services out of the goodness of their hearts, and the data that we enter (personal data in both senses) is the source of their market power. Sure, there are privacy-focused alternatives (DuckDuckGo, Hushmail, and Diaspora come immediately to mind), but their features are lacking and relatively few people use them. Crypto is hard, and it runs directly against the business model of the major internet companies. The best way to keep a secret is not to tell anybody, and if you have a real secret, I’d strongly advise you never to tell a computer.

Practically, not even the director of the CIA follows that advice. Unless you’re Amish, you have to tell computers things all the time, which leads to the problem of what the government can do, and should not do, with all that data. I personally don’t like the “if you’ve done nothing wrong, you have nothing to fear” argument advanced by advocates of the security state, because the historical record shows plenty of good reasons to distrust American intelligence agencies, ranging from mere incompetence (missing the imminent collapse of the Berlin Wall, Iraq’s non-existent WMDs, the Arab Spring) to outright criminality (CIA-backed assassinations in the 60s and 70s, COINTELPRO, Iran-Contra). But this wasn’t some kind of rogue operation: data was collected according to the PATRIOT Act, overseen by FISA judges, and Congress was informed. Certainly it was according to the letter of the law rather than the spirit, but it happened within the democratically elected, bipartisan mechanisms of government just the same. It’s hard to deny that many voters were willing to make that trade of liberty against security.

The dream of counter-terror experts everywhere is some kind of perfect prediction machine: a device that could sift through masses of data and isolate the unique signature of a terrorist plot before it materializes. This is a fantasy. Signals intelligence and social network analysis are immensely useful for mapping a known entity and determining its intentions, but picking ‘lone wolves’ out of a mass of civilians is a different beast entirely. Likewise, data mining can do great work on large and detailed datasets, but since 2001 there have been only a handful of terrorist attacks in America and Europe (local insurgencies have very different objectives and behaviors). There is no signature of an imminent terrorist attack. Realistically, what these systems can do is very rapidly and precisely reconstruct the past, making the history of an event legible in order to determine the extent of an attack and hunt down co-conspirators.
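The base-rate arithmetic behind that fantasy is worth spelling out. Here is a back-of-the-envelope sketch (every number is invented for illustration) of what happens when even an implausibly accurate classifier hunts for something vanishingly rare:

    # Base-rate arithmetic with made-up numbers: a rare target drowns
    # even a very accurate detector in false positives.
    population  = 300_000_000   # hypothetical pool of people under analysis
    plotters    = 100           # hypothetical number of actual plotters
    sensitivity = 0.99          # P(flagged | plotter)
    false_pos   = 0.001         # P(flagged | innocent), wildly optimistic

    true_hits  = plotters * sensitivity               # ~99 plotters flagged
    false_hits = (population - plotters) * false_pos  # ~300,000 innocents flagged
    print(true_hits / (true_hits + false_hits))       # ~0.0003

Roughly three thousand innocents get flagged for every real plotter; no corps of analysts can drink from that firehose, which is why these systems are far better at working backwards from known events.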

What’s happening isn’t really surveillance; the millions of people buying 1984 are reading the wrong part of the book. Orwell’s Party is terrifying not because of the torture chambers in the Ministry of Love, but because it can say “We have always been at war with Eurasia, and the chocolate ration has been increased to 2 oz” (when last month it was 3 oz), and what the Party says is true. Rewriting history is dangerous for nations, but as Daniel Solove has eloquently pointed out, for individuals the proper literary comparison isn’t 1984, it’s Kafka’s The Trial, where the protagonist is bounced powerlessly and senselessly through an immense bureaucracy.

The political problems of these programs and of Big Data are not the same as the problems of secret prisons, torture chambers, and non-judicial executions, though all those things are very real and very dangerous to civil liberties. The more common assaults are the unnecessary audit, the line at the airport, the job application rejected because of a bad credit score, and the utter lack of recourse that we as citizens have against these abuses by large-scale organizations, corporations and governments alike.

We could pass laws to force basic changes in how computers work and how data is collected, such as deleting everything as soon as it comes in or packing databases with random chaff. But purely legal solutions to technological problems are almost never effective, and they usually add another layer of complexity to the existing mess. As any security expert will tell you, security through obscurity is no security at all, and “anonymized” data is far from anonymous. Giving up and living as suspects under glass, fugitives in our own lives, is equally unappealing.

The alternative is recognizing that in a world of omnipresent computation, leaving traces behind is inevitable, but that rather than the uncertain shield of privacy, we can wield a sword of truth. To ask for privacy is to ask to be forgotten, something both impossible and generally undesirable. We should have the right to set the record straight, to demand to know what is known about us as individuals and as a population, and to appeal what are currently non-judicial and unaccountable actions. Brandeis’s right to be let alone is not the right to disappear, but rather the right to demand that those who would harass us reveal themselves and defend their actions.


This essay was originally published at As We Now Think.


20121220

To See More Clearly

A few days ago, Evan Selinger wrote an article on Augmented-Reality Racism, which has (unfortunately) been gaining some traction around the web. I say ‘unfortunately’ because Evan is a sharp and insightful thinker who can translate dense philosophical ideas into nuanced and popular forms (see his July article on The Philosophy of the Technology of the Gun for a great example), and Augmented-Reality Racism is not that. Since I think there’s some merit to the premise, I’d like to take my own whack at it.

Augmented reality (AR) takes modern computing technology and puts it on the bridge of your nose, overlaying projected images and sounds onto your view of the world. Evan hypothesizes that such a technology could be used in a racist manner, either to ‘erase’ people of a certain race from view or to become super-aware of their presence (pulse-Doppler blackdar?). He notes that racist agendas have frequently been embedded in technologies, like Robert Moses’ low bridges on the Long Island Parkway, designed to keep city buses, and the black passengers who rode them, away from the beaches. Evan concludes by wondering if augmented realities designed to individualize and humanize the masses in the crowd might be a good way to build social bonds and empathy.

There’s an irritating floppiness to the scenario (does racist AR obscure people or highlight them?), but more fundamentally, the article fails to think deeply about augmented reality or the relationship between technology and race. First, AR: augmented reality is much more than the visible front-end of a head-mounted display. AR (properly, Nathan Jurgenson’s definition of Mild Augmented Reality) is the belief that “The digital and physical are part of one reality, have different properties, and interact.” It’s about ‘spiming’ as much of the world as possible, so that the qualities and histories of objects can be viewed and understood in those nifty head-mounted displays.

In many ways, the world is already augmented. Any surface covered with words and other signs and signifiers is already augmented, and in certain places that means pretty much every surface. Awnings block the rain and advertise stores. Packages conceal the materiality of their contents while displaying an image. What makes the new augmented reality unique is that digital information is fluid, protean, infinitely customizable and transformative. Much like alchemists, modern entrepreneurs invoke a quicksilver digital as they attempt to transmute the dull substance of commerce into glittering profits.

Race is a complicated topic, far too big to be contained in a short essay, but one of the most interesting sections in Sorting Things Out by Bowker and Star concerns the system of racial classification used in Apartheid South Africa. From 1948 to 1994, every South African was classified as Black, White, Indian, or Coloured, with segregated housing, employment, and legal rights. Apartheid was an institutional system, a technology backed by a racial pseudo-science, for legitimating and perpetuating the exploitation and oppression of a large portion of the South African population. But it was also a system for generating order, and Bowker and Star explain in detail the Kafka-esque nightmare of lives upended by the arbitrary classificatory decisions of petty bureaucrats. To make this absurd system work, the physical bodies of non-white South Africans had to be ‘augmented’ with administrative tests and pass books detailing precisely what race a person belonged to.

Now, contemporary America is not nearly as racist as apartheid South Africa, but race still matters here, whether on the census form or in the lived experience of people who face prejudice, police brutality, and shorter life expectancies. What I find interesting is that as America has moved away from the worst excesses of Jim Crow, racism only becomes visible through technology. We know that the NYPD is racist from its own data on Stop and Frisk, which records statistically higher numbers of searches of African Americans and Hispanics alongside fewer discoveries of illegal drugs or weapons. If you buy the results of the Implicit Association Tests, pretty much everybody harbors some degree of racist sentiment. Racism as a matter of systemic bias, rather than overt discrimination, is only revealed through the augmented reality of statistics and demographics, which attach data to people.
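That kind of revelation takes only a little arithmetic. Here is a minimal sketch of how aggregate data surfaces systemic bias; the figures below are invented placeholders, not the actual NYPD numbers:

    # Invented placeholder figures, not the actual NYPD data.
    stops = {"black": 50_000, "hispanic": 30_000, "white": 10_000}
    hits  = {"black":  1_000, "hispanic":    700, "white":    600}  # contraband found

    for group in stops:
        print(group, round(hits[group] / stops[group], 3))  # hit rate per stop

A higher stop rate paired with a lower hit rate is the statistical signature of over-stopping: the bias lives in the aggregate pattern, not in any single officer’s stated intent.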

There are also interesting patterns in how people of different races and classes use technology: for example, the now-classic description of MySpace as a ‘digital ghetto’ afflicted by ‘white flight’, or the fact that twice as many African-Americans use cell phones as their primary form of internet access compared to whites. Race in America is more than skin color; it’s also cultural, expressed in patterns of speech and metaphor. Even bad ideas sound plausible when presented articulately, with clean graphic design and proofreading. I wonder what would happen to political discourse if we removed this embedded bias towards certain authoritative voices by making everybody present their ideas in ERMAHGERD or after nurbling. Making the form of arguments identical (and ridiculous) might help us focus on their contents.

To return to the premise of augmented-reality racism, I’d take the opposite tack from Evan. If race is a matter of surface appearances, then an augmentation that erases those surface differences is likely to make us less racist on an individual level. To flip a popular saying: on the internet, we’re all dogs. And while I’m sure there are some Racial Holy War (link warning: extreme racism) types who would enjoy knowing precisely how many ‘mud people’ there are in a three-mile radius, so they could feel threatened and hateful all the time, for most people a greater awareness of the statistical and systemic patterns of racism (link warning: awesome maps) is a useful tool for engaging injustices we are currently ignorant of. As for humanizing people, maybe it’s holiday misanthropy, but most people are kinda terrible (link warning: internet Nice Guys), and we probably don’t want to know how much they enjoy Here Comes Honey Boo Boo, or their views on gun control, or the contents of their fridge. Apathy is the lubricant of urban living.

Evan opened with a story about his very Jewish grandmother, so I’d like to conclude with a story about my equally Jewish great-grandmother, who had very poor eyesight and only got her first pair of glasses late in life. Right after getting her new glasses, she went for her usual walk around the neighborhood with her daughter, and began to sob.
“Ma, why are you crying?” my grandmother (then a young woman) asked her mother.
“Everybody looks so sad,” the old lady said. “Before I could see, I thought they were smiling all the time.”