
20121101

Book Reviews: The Submerged State and the Righteous Mind


The Submerged State by Suzanne Mettler

and

The Righteous Mind by Jonathan Haidt

It doesn't take a pundit to know that American politics are screwed up beyond measure. Congress is stuck in gridlock, the economy is stalled, elections are decided by culture war attack ads, and politics itself is derided as a pursuit for liars and hustlers. Suzanne Mettler explains why we’ve become disenchanted with political solutions to our problems, while Jonathan Haidt looks at the deeper moral differences between liberals and conservatives.

The key issue is not the government we see, but the government we don't: the vast tangle of tax breaks, public-private partnerships, and incentives that Mettler calls 'the submerged state'. Its size is astounding: 8% of GDP, fully half the size of the visible state of Medicare, Social Security, Medicaid, defense, debt service, and the relatively minuscule discretionary funding that covers everything else the government does, from welfare to transportation to education to NASA and foreign aid.

Mettler deploys economic and social statistics to show that for all its expense, the submerged state is a failure on every level. Whatever your politics, there is something to despise about it. It represents a transfer of wealth from the poor to the wealthy, even as most Americans abstractly support reducing inequality. It is a distortionary government influence on the workings of the free market, without the relative clarity of direct provision of services or regulation. It fails to accomplish its stated policy goals of improving access to education, healthcare, and housing. It leads to civic disengagement, as those who benefit fail to see how the government has helped them, or how they can meaningfully impact politics through voting. And above all, it institutionalizes corruption, as broad public participation is replaced by the lobbying of narrowly constituted interest groups.

This book is not perfect. Mettler is a political scientist, and she has the biases of her profession: that conservatives are responsible for much of what's gone wrong with America over the past 30 years (disclosure: I agree), and that citizens would vote 'better' if they were just better informed. This book doesn't fatally harpoon the submerged state, but Mettler has marked the target for future scholars and politicians. The submerged state is a powerful lens for seeing many divergent policies as part of a broad trend towards political disengagement, and government that is not smaller, but rather inflexible and unresponsive.

In a just and sensible world, the 2012 Presidential race would be decided by the candidates’ aggressiveness in tackling the submerged state. Unfortunately, last I checked, we’re still on Earth. Democracy isn't just about the boring but necessary business of deciding who keeps the sewers running and collects the taxes; it is also about the type of society that we wish to live in. Voters don’t vote on “rational” economic grounds, but rather on the basis of shared values and aspirations.

Jonathan Haidt draws broadly from research in psychology, anthropology, and biology to develop a six-factor basis for morality (Care/Harm, Liberty/Oppression, Fairness/Cheating, Loyalty/Betrayal, Authority/Subversion, Sanctity/Degradation), and show that moral judgment is an innate intuitive ability accompanied by post-hoc justifications. He argues that morality serves to bind non-related groups together, and moral skills have been favored by biological and social evolutionary mechanisms over human history.

In practical political terms, the Enlightenment morality embodied by Liberalism draws from only the first three moral factors, while Conservatism draws from all six. This explains both the differences between liberal and conservative values, and why conservatives beat the stuffing out of liberals at the polls. Drawing on a more complex moral framework, they are able to make more convincing arguments in favor of their preferred policies.

However, Haidt is unwilling to follow his theory to its ultimate question: Can a democratic political system that privileges the rights of minorities sustain decision-making based on all six moral factors? Care/Harm, Liberty/Oppression, and Fairness/Cheating are universal factors; everybody uses them, and aside from philosophical paradoxes like the famous Trolley Problem, we agree on when they are upheld or violated. Loyalty/Betrayal, Authority/Subversion, and Sanctity/Degradation are provincial factors; they're different for every culture and every individual.

A moral order for a pluralistic society which takes the latter three factors seriously must either force people to uphold a morality they do not believe in, or segregate people based on their different interpretations of morality. Perhaps I'm sensitive to such concerns because of my secular Jewish culture, but forcing people to profess beliefs not their own, or requiring them to live in communities of only like-minded individuals is profoundly unjust, and practically impossible.

Conservatism struggles with the reality that we no longer live in separated communities. We have one global economy, one atmosphere, one water cycle, one planetary oil supply, one nuclear Armageddon, etc. Haidt faults liberalism for damaging American moral capital in the 60s and 70s, but he doesn't explain how conservative politics can govern effectively without infringing on liberty or collapsing into gridlock.

Imagine trying to get conservatives in America, China, and the Middle East to reach an agreement about freedom of speech, the role of religion in the public sphere, or the proper authority of the state. Value conflicts would impede the necessary daily work of trade and treaties, peace and prosperity, and a shared and sustainable future. It might be a more moral world, but it would not be a better one.

As Benjamin Franklin said, “We must all hang together, or most assuredly we shall all hang separately.” Liberals across the world may disagree on the details, but can broadly agree on the framework for approaching continental-scale and international policy problems. We all have the right to vote according to our values, but we should take responsibility for recognizing the limited power of law to enforce those values in others.


20121022

Something something election blog something


The last Presidential debate just finished, and it turns out that I haven’t written anything about the election all year. It’s been hard to find enough substance to meet my standards. I loved the three-ring circus that was the GOP primary (Herman Cain, any questions?), but we all knew Romney was the inevitable nominee despite himself. The state of the race to 270 Electoral College votes, and the hard work of turning out the vanishingly small number of undecided voters in the handful of swing states, is beyond my expertise; I’ll leave that to Nate Silver. I just don’t have the time to evaluate in detail the candidates’ platforms and policies; not that much detail is being released. And besides, not that it’s any surprise to any of my readers, but I’m a staunch cultural Democrat: pro-women, pro-equality, anti-war, and living proof that America is not a Christian nation.

This isn’t likely to change: My earliest political memories were 1) the Clinton-Lewinsky impeachment hearings, 2) the 2000 election and the Florida Supreme Court debacle, and 3) the entire motherfucking Bush administration, whose epochal combination of incompetence, arrogance, and short-sightedness left me unable to find a single decent thing accomplished by the American government from 2000-2008. As far as I’m concerned, anybody who campaigns with an “R” after his or her name without renouncing George W. Bush and all his works is entirely unworthy of respect.

Of course, just because I'm decided doesn't mean that I can't have an opinion. And ((spoilers ahead)), that opinion is one of cynicism and disengagement.

I won’t be voting for Mitt Romney; the Obama endorsement from The Salt Lake Tribune explains why in more or less the same language I’d use. The constantly shifting positions, the refusal to share policy specifics, and the very real probability that he holds a Randian ‘takers-vs-makers’ view of society, as exhibited in his infamous 47% comments, all serve to disqualify him from higher office.

On the other hand, I’m not really inclined towards Obama, even after a strong showing in this last debate. What I wrote this January is still true.

I supported Obama [in 2008] because I believed that he could articulate a vision for American democracy in the 21st century. I thought that the author of Dreams from my Father, the 2004 Democratic Convention Keynote, and the speech on Reverend Wright, would be somebody who could inspire America in the same way that Kennedy and Reagan did. We needed, and still need, inspiration more than any specific policy solution. I believed that roused to action, the American people would find their own solutions to major problems, like healthcare, energy, education, and the war.
 Instead, Barack Obama has presided over an ugly and secretive government. It is a government that uses drones to kill terrorists on the other side of the world, while making the absurd claim that “There hasn’t been a single collateral death because of the exceptional proficiency, precision of the capabilities we’ve been able to develop” (according to senior counter-terrorism official John O. Brennan), despite ample evidence to the contrary. It is a government that has failed to address basic concerns about hidden risks and ‘shadow banks’ in the financial system. And while the rancor and insanity of the 112th Congress is not Obama’s fault, the White House is little better. On the Keystone XL pipeline and the Plan B birth control pill, the Obama administration has given the impression that it makes decisions based not on evidence, or on what the president believes would be right for the country, but on what is most politically expedient.
I’d like a frank debate about jobs and the nature of work in the 21st century, because humans are losing to machines. We need to talk about communities and belonging, because our society is more fluid, more free, and more alienating than ever before.  We need to talk about war and peace, because we have an absurdly expensive white elephant of a military with no clear mission. And we need to seriously talk about energy and sustainability, because we get precisely one shot at technological civilization and the infrastructure that sustains us is far from secure.

But none of this happened, because the conventional wisdom is that voters care about pocketbook issues and the old staples of the culture wars. The big issues and questions don’t fit neatly into the ideological frameworks of either party. If campaigning is mostly about repeating the right set of meaningless shibboleths until 51% of the voters decide to check the mark next to your name, then bringing up non-standard narratives is always a mistake. Who am I to criticize the electoral performance of Lee Atwater, Karl Rove, James Carville, David Axelrod, and all the other operatives who have honed the tools of campaigning into a lethal arsenal? But if we can’t talk about these political problems during a presidential campaign, then when?

Go ahead and vote if you want to. I don’t really care (unless you live in Ohio). Obama has been an adequate caretaker president at a time when this nation needed so much more. Romney has failed to demonstrate why he should have the job, and personally, I just don't like him. He fails the "who-would-I-like-to-have-a-beer-with" test. Hell, he even fails the coffee test. But the 2012 election isn't about politics or likability; at best, it's about administration. Sometimes, it seems like the most powerful man in the free world has all the independence of thought and action of a middle-school student treasurer.

Maybe this time I'll write in Cthulhu.


20120605

Guerrilla Science

Observing various scientific “controversies” over the past few years, I’ve seen a pattern repeated again and again between the scientific mainstream and dissenters. Whether it’s about global warming, a link between vaccines and autism, the safety of GMO crops, or any other issue, the conversation looks pretty much like this.

Scientists: “We have developed the following hypothesis due to a preponderance of evidence, and the scientific consensus says we should enact the following policies.”
Dissenters: “Well, that’s just your theory. What about this evidence which argues something entirely different?”
Scientists:  “That evidence is methodologically and theoretically flawed, and we have dismissed it for scientific reasons.”
Dissenters: “No, you’re dismissing it because you’ve been bought off by Big Pharma/Monsanto/Al Gore!”
Scientists: “Well, you’re anti-science, and you aren’t responsible enough to participate in this debate. Come back when you’re willing to accept the truth.”

And then the two camps go their separate ways: The dissenters to fringe websites, where they catalog the corruption of mainstream science and develop their alternative bodies of evidence, and the scientists to the letters pages of scientific journals, where they write pleas for better science education and communication, so that science can drive all the frauds out of the public sphere and we can have rational policies again, like we did in the good old days.

The other thing that I’ve observed while following these controversies is that mainstream science is losing. The American political system has become polarized around the truth or falsity of anthropogenic global warming, despite an overwhelming scientific consensus that it is happening and it is a problem. Fewer people completely vaccinate their kids each year, even though the original Wakefield study has been totally discredited and disease rates are on the rise. And fears of GMOs have become a permanent part of European politics, and a rising force in America and China.

This leads to one of two conclusions: either the communication and education efforts that respected scientists keep calling for have been falling short and should simply be increased, or the conventional framing of the problem is essentially wrong and misleading. I believe it is the latter: the arena of public scientific debate has changed in recent decades, the dissenters are “guerrilla scientists” who, like guerrilla fighters, use asymmetric strategies to avoid the superior strength of their foe, and to win, mainstream science must find an equally adaptable counter-strategy.

To explain this idea, I’m going to need to talk about science and guerrilla warfare. Please bear with me.

As a PhD student in science and technology studies, one of the biggest questions that we face is “What is science?” There are lots of good definitions: facts about the natural world, systematic knowledge, a method for generating said facts and knowledge and ensuring their reliability using experiment and observation. But all of these definitions conceal the process of how science is made: how specific claims become true facts or false hypotheses. To understand that process, we need to go inside science, inside scientific writing, and inside the lab. Bruno Latour has developed some of the most powerful lenses on the actual practice of science in his books Laboratory Life and Science in Action.

For Latour, science is a rhetoric: a way of convincing other people to believe your claims. The form of the modern scientific paper has been carefully developed to be as convincing as possible. A successful scientific paper integrates itself into a network of previous scholarship, explains how it will extend the results of previous scholars, presents a method that can be duplicated by others, shows results (typically in graphical form), and then discusses those results. I’m going to use as my example a paper by the other author of this blog, “A model for the origin and properties of flicker-induced geometric phosphenes”, but any other paper would work just as well.

The paper begins by summarizing the previous work in the field, starting in 1819 and moving through the 1970s and into the present day. The introduction establishes the paper vis-à-vis previous work on the visual system, and a question about whether flicker-induced imagery originates in the retina or the visual cortex. The methods section describes the Wilson-Cowan equation used to model flicker in a simulated neural network and how to implement that equation in a computer program; the paper then presents the images produced by the model, and finally discusses how those images might relate to what happens in the brain and what we can perceive when we close our eyes and press on our eyeballs (or use ze goggles).
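To give a concrete sense of what “implementing that equation in a computer program” involves, here is a minimal sketch in Python. This is my own illustration, not the authors’ code: it integrates the space-homogeneous (zero-dimensional) Wilson-Cowan equations under a square-wave flicker drive, and every parameter value is an assumption chosen for readability rather than taken from the paper.

```python
# Minimal sketch of Wilson-Cowan rate dynamics driven by a flickering
# stimulus. Illustrative only; all parameters below are assumptions.
import numpy as np

def sigmoid(x):
    """Logistic firing-rate nonlinearity."""
    return 1.0 / (1.0 + np.exp(-x))

def simulate(freq_hz=10.0, t_max=2.0, dt=1e-4):
    """Euler-integrate excitatory (E) and inhibitory (I) populations."""
    n = int(t_max / dt)
    E = np.zeros(n)
    I = np.zeros(n)
    tau_e, tau_i = 0.01, 0.02                      # time constants, seconds (assumed)
    w_ee, w_ei, w_ie, w_ii = 12.0, 10.0, 9.0, 3.0  # coupling weights (assumed)
    for k in range(n - 1):
        t = k * dt
        # Square-wave flicker drive to the excitatory population.
        P = 1.5 + 1.5 * np.sign(np.sin(2.0 * np.pi * freq_hz * t))
        dE = (-E[k] + sigmoid(w_ee * E[k] - w_ei * I[k] + P)) / tau_e
        dI = (-I[k] + sigmoid(w_ie * E[k] - w_ii * I[k])) / tau_i
        E[k + 1] = E[k] + dt * dE
        I[k + 1] = I[k] + dt * dI
    return E, I

if __name__ == "__main__":
    E, I = simulate(freq_hz=10.0)
    print("mean excitatory rate:", E.mean())
```

The actual model is defined over a two-dimensional sheet of cortex with spatial coupling between points, which is where the geometric patterns come from; the lumped version above only shows the basic rate dynamics and how the populations entrain to the flicker frequency.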

At every turn, the paper preemptively parries those who would try to doubt it. “You think that this problem is unimportant. Here are people who have worked on it before me.” “You doubt my math? Write your own program and check my results.” “You disagree with my choice of the Wilson-Cowan equation? Here are 1040 papers that also use it. Do you disagree with all of them?” The paper is structured and linked such that to disagree with it either requires opposing a much larger and more authoritative body of scholarship than “things that Mike Rule, Matt Stoffregen, and Bard Ermentrout say”, or going into their lab, checking that all their machines work, that the graphs in the paper are actually reproducible, and essentially duplicating their effort and expertise.

This post-modernist view of science can be disconcerting at first: what about objective physical reality? What about the search for truth? Has science just become another kind of blind faith, based on appeal to past authority? No. What Latour tells us is that scientists do not know a priori what is ‘real’ and ‘true’. Those words are only applied to a hypothesis after an intense process of purification and examination by the community of scientists that rules out every other possible explanation.

The picture of science that Latour develops is an interlocking network of claims about the natural world, linked into a mutually reinforcing pattern with more accepted claims at the center and weaker claims at the fringes. Science as a totality is like a fractal star fort, defensible from any angle.  But this picture is partial and passive.

The life of science is in active disputes, for example, “What is the structure of DNA circa 1953?” Disputes are opposing versions of reality, and they are only settled by the destruction and absorption of alternative facts and theories into the final ‘scientific truth’. Actor-Network Theory describes this process as one of ‘enrollment’, wherein scientists enlist facts, instruments, and people as allies in their cause, with the aim of building the strongest rhetorical network.

Looking at this, it struck me that the contest is much like a battle, with the scientist deploying his or her enrolled facts like a general committing his soldiers. There are many points of congruence between these models: critical questions and strategic points, the ability to generate new results and supply, predictive power and firepower. But the key point is that in science, as in war, there are rules.

Science has its own kind of Geneva Convention. Not an explicit treaty, but social norms that describe how science should be done, and how the contest should be decided. I’m not going to provide a complete set of norms, but some of the more important ones might be: results must be reproducible; judge the idea and not the person; cite your sources; do not present others’ work as your own; remain open-minded; accept the accolades of your peers humbly. A scientist who held onto a clearly discredited theory would not be respected, and it is considered bad form to pillage a rival’s lab and enslave their postdocs.

In action, the difference between mainstream scientists and dissenters is that dissenters don’t play by the rules. Dissenters care more about their personal commitments than the structure of science as a whole. They do not cede the field gracefully if their facts are overturned. And they accept a wider variety of evidence as a basis for their claims, including social, moral, economic, and political factors. They may not be good scientists, and their theories are frequently shoddy and mystical, but it is important to recognize that they are engaged in essentially the same kind of work as mainstream scientists: making cause-and-effect claims about the natural world. Calling dissenters ‘anti-science’ implies that scientists should ignore and belittle them as unworthy of serious critique. Calling them guerrillas suggests a very different approach for understanding their aims and engaging with them.

Guerrilla warfare is political warfare. In conventional war, the goal is defeat of the enemy through decisive battle, and strategy is the art of staging the decisive battle on favorable terms. The aim of guerrilla warfare is to demonstrate the political illegitimacy of the people in charge while building popular support for revolution. Strategy is focused on swift strikes to demonstrate the ineffectiveness of the governance and provoke reprisals against the people, the preservation of the guerrilla’s own forces, and the use of time to wear out the enemy’s will to resist. While conventional military firepower is still important in guerrilla war, it is of secondary importance compared to psychological and political factors. In guerrilla warfare, the winner is the side that everybody believes has won, not the side that maintains control of the battlefield afterwards.

Guerrilla warfare is a complicated subject, and no two conflicts are alike, but some common patterns can be drawn. Col John Boyd developed a theory of warfare based on learning systems, and he noticed that as a force slides towards defeat, it becomes isolated and insular, stops taking in information from the outside world, and is eventually confined to irrelevance. Information and morality are central; as the American military learned in Vietnam in the 1960s and in Iraq in 2003 and 2004, firepower is useless if targets cannot be located, and support can only be gained through demonstrating moral strength and sensitivity. To beat guerrillas, the government must demonstrate its superiority through active policies that improve the lot of the people while avoiding internal corruption. Successful counter-insurgency strategies, such as the Iraq Surge implemented by General Petraeus, aim to isolate guerrillas, to draw wavering fighters back into the government’s camp, and to find and kill the most hardcore commanders who cannot be converted.

Combining these two theories, Latour’s Actor-Network Theory and Boyd’s OODA Loop, the shape of the problem and its solution begin to emerge. Scientific guerrillas exist because scientific expertise is a key buttress of democratic decision-making. 21st century American culture is such that a policy must appeal both to the will of the people and to an external reality as informed by expert, i.e. scientific, opinion. But science, to put it bluntly, is hard. It requires a long and grueling apprenticeship and then access to expensive and specialized laboratory equipment. And worse, from a political perspective, science is not democratic. No matter how many geeks wished that the OPERA neutrinos were truly faster than light, that result stubbornly remains an experimental error. It’s far easier to don the guise of expertise when it’s needed to support a policy position than it is to genuinely discover the truth according to the strict rules of science.

In this context, saying that the dissenters need to play by mainstream standards of evidence is like saying that we just need Al Qaeda to put on uniforms, gather around Tora Bora, and have that decisive battle we’ve been waiting for. It’s a fantasy, because it involves convincing guerrillas who are winning to fight a conventional battle that they will surely lose. Science education, science funding, and more public understanding of science are equivalent to sending in more troops, more weapons, more airstrikes. It can stabilize the situation, but it is unlikely to actually defeat the guerrillas.

I worry that science is becoming isolated in a Boydian sense. Scientific papers only cite other scientific papers; most scientists work and live in enclaves around major research universities. There are extremely good reasons for this (from a conventional perspective, it generates stronger science), but it has also made science more brittle, less relevant, and less politically legitimate.

Like it or not, scientists have become embroiled in a wide variety of guerrilla disputes on major issues, and I’ve not seen a robust strategy for countering the guerrillas. I love and respect science; it’s the best tool for understanding and improving the world that we have, but it is under attack in ways that most people can’t even see, and it is not effectively defending itself. Guerrillas can be beaten, but it will require an active strategy of integrity, candor, and two-way communication. The stakes could not be higher. As Henry Kissinger said of the Vietnam War in 1969, “The guerrilla wins if he does not lose; the conventional army loses if it does not win.”



20120508

Drone Wars

Well kiddies, guess who just got an op-ed published in The Cairo Review. Guess this makes me some kind of international policy thing now.

 Warfare is partly defined by the images of its weapons, from medieval knights in armor clashing on the battlefield to the mushroom clouds of modern nuclear weapons. For warfare in the twenty-first century, consider the image of a video screen. In September 2000, the counter-terrorism advisor in the White House, Richard A. Clarke, watched a video of a tall man in white robes. The man was probably Osama Bin Laden, who by that time had organized the attacks on the American embassies in Tanzania and Kenya. The man’s location was a compound outside Kandahar, Afghanistan. The videographer was a robot, an RQ-1 Predator drone aircraft.

Clarke, along with two senior Central Intelligence Agency officials who were also present, Cofer Black and Charles E. Allen, recognized the Predator’s potential to revolutionize national security by providing real-time intelligence for precision missile strikes—using manned or unmanned weapons—on enemy targets. Then they put the idea aside, waiting for an opportunity when a drone mission might be the best weapon for a job. After the September 11, 2001, terrorist attacks on New York and Washington, DC, armed drones were targeting terrorists as well as providing air support for Special Forces troops in Afghanistan and Iraq. One decade later, the armed Predator is a key instrument of American statecraft. Missiles launched by the drones rain down over the tribal areas of Pakistan, Yemen, Somalia, and Libya, killing figures linked to Al-Qaeda or the Taliban, such as Anwar Al-Awlaki, Baitullah Mehsud, and Badar Mansoor, as well as thousands of foot soldiers and a significant number of civilians.

All of this is happening without very much awareness in the United States. The Pakistani government, the American Civil Liberties Union, the United Nations Human Rights Council, and Amnesty International—among others—have condemned the ethics and legality of America’s Drone Wars. The strikes are deemed violations of national sovereignty and a tool of war that inevitably leads to the deaths of innocent civilians. These moral and legal arguments are important, but they have failed to stop the Drone Wars, or even initiate serious public debate on the uses, merits, and limitations of this kind of warfare. Perhaps before asking questions like “Is the Predator drone an ethical weapon?” or “Is its use in this particular conflict within the boundaries of international law?”, it is important to understand what the Predator drone is, how it came to be armed, how the armed drone changes military capabilities, and—most important—how the drone program evades democratic accountability.

Read the Rest


20120413

Beyond Space Exploration

Predrag Bokšić | perceptron
This past Friday, my good friend and colleague John Carter McKnight organized a little stealth seminar on space travel, and on detecting small trends in the social media sphere before they blow up. These days, John does the politics of virtual worlds, but about a decade ago he was active in the spaceflight movement (blast from the past, eh John?) before the whole thing got swallowed by the invasion of Iraq and America’s imperial mission to search and destroy every bad person on the AfPak border with robots. But he and fellow mad social scientist Kathryn Denning (part of the team that won DARPA’s 100 Year Starship Challenge) had a conversation about the viability of private spaceflight, and the notion that a bunch of tech billionaires (Elon Musk, Paul Allen, Jeff Bezos, Robert Bigelow) have the resources to launch their own space program, and damn government policy or economic rationality! They want to go to space because it’s cool and they have the money.

There are a lot of new actors on the stage, but at the end of the day, the big money is still with the government, either in NASA or the military, and if space flight is really our human destiny, it’s going to need public buy-in. There are major technical challenges to putting large numbers of people in orbit: launch costs are still too damn high, running a closed-cycle life support system is an open problem, and zero gravity is tough on the human body. If this stuff is what you care about, I recommend Project Rho. But I’m not an engineer and I don’t do technical fixes.

My problem with space these days is that the rhetoric and policies are dated bullshit, and rather than try to come up with new justifications, space advocates just double down on the same old arguments. For example, let’s take Neil deGrasse Tyson’s recent testimony before Congress.

“The only people doing much dreaming back then [late 1950s-early 1970s] were scientists, engineers, and technologists. Their visions of tomorrow derive from their formal training as discoverers. And what inspired them was America’s bold and visible investment on the space frontier.

Exploration of the unknown might not strike everyone as a priority. Yet audacious visions have the power to alter mind-states — to change assumptions of what is possible. When a nation permits itself to dream big, those dreams pervade its citizens’ ambitions. They energize the electorate. During the Apollo era, you didn’t need government programs to convince people that doing science and engineering was good for the country. It was self-evident. And even those not formally trained in technical fields embraced what those fields meant for the collective national future.”


Let me just grab some key words: Dream, ambition, discover, explore, frontier. These are the core rhetorics of the Space Race, and NdGT argues that by refunding NASA and putting these ideals at the center of the national mission, we can inspire the future. The problem with his argument is that it’s causally reversed. These ideas inspired a generation of scientists because they related to the immediate political and cultural concerns of the era.

The ambition of the Space Race was tied up with a competition for national prestige between the USA and USSR. The competition for space demonstrated the technical capabilities necessary to fight a nuclear war to domestic, opposed, and unaligned audiences. More than that, however, the space race transformed the unthinkable 45-minute annihilation of an actual nuclear war into a human drama. As Tom Wolfe explains in the authoritative cultural history of space, The Right Stuff, astronauts were modern knights, champions of democracy who put their courage to the test by riding flaming steeds into orbit.

At bottom, it was a matter of prestige, and prestige only goes so far as a rationale for any activity. Keeping up with the Joneses only makes sense when there are Joneses, and the world of today looks very different from the bipolar geopolitics of the Cold War. China and India are not threats in the same way that the USSR was, and their ability to match 40-year-old American accomplishments is not seen as diminishing the prestige of being first. Indeed, if you look at the List of Space Agencies, you’ll see some surprising countries: Greece, Nigeria, Mongolia, Sri Lanka. For these small states, having a space program is prestigious; just launching a satellite is a major accomplishment. A similar logic applies to individuals in private spaceflight: a successful launch proves their engineering chops, while being one of the very few private astronauts gives unique bragging rights. But as space flight becomes more common, it must become less prestigious.

Second, space flight is heroic only to the extent that it is dangerous. People watched rocket launches because there was a very real chance that the rockets would explode. Space flight, once it became relatively safe, stopped attracting public interest. There are very good human and economic reasons why we want our rockets to be as reliable as possible (and drawing the crowd that watches NASCAR for the crashes is not exactly a high ambition), but NASA’s zero-risk attitude is anathema to innovation.

In fact, I think there might be an argument that competing against the prestigious missions of the past is one of the things harming NASA. The marginally improved next-gen launcher is discarded in favor of some aspirational, transformational project. Politicians and the public expect a mission to match the heroism and drama of the moon landings, forgetting that much of the heroism and drama was retrospective. Modern space flight is compared to its heroic past, and inevitably found wanting.

The other argument that has to be deconstructed is “exploring the frontier”. You want to explore space? Go outside and look up. In the Age of Exploration, people had to sail on ships to other places because the world is curved and you can’t see over the horizon. While there are legitimate disputes about manned spaceflight versus robotic probes, telescopes unarguably tell us more about the universe than any reasonable manned interstellar mission would. On a cosmic scope, space exploration feels less like a journey into the unknown, and more like a paddle across the lagoon. Yes, it is dangerous and challenging, but we can see the far shore.

The Age of Exploration rhetoric also ignores the commercial motives of the European explorers, who sailed around the world to trade with, colonize, and conquer native people. Exploration was at its core a human and economic effort. Not until the 18th century and the voyages of James Cook did pure science become part of the motive for exploration.

Space is a great resource for pure science, like telescopes and Earth-monitoring satellites, but the economic motive is harder to find. There’s no one to trade with, and the most accessible resource is simply altitude, useful for communication and surveillance systems. Asteroid mining and other space resource extraction is uneconomic because spaceflight is expensive, and spaceflight is expensive because there’s no economic reason to go to space.

Tyson, Elon Musk, and other space-flight advocates hope that one day the economic motives will be self-sustaining, but until then they desire access to public resources, and they play on national pride, engineering excellence, the value of pure science, and other essentially technological arguments to obtain them. But as declining interest in human space flight shows, people can see through these narrowly constructed rhetorics of pride and exploration.

I think one solution is to increase our tolerance for risk (the hundreds of people volunteering for a one-way trip to Mars show that there's something there), and we need heroes. We would also need to accept that some of them would die, and I'm not sure if we can do that. Another low-hanging economic mission that needs more effort is cleaning up space debris: some orbits are close to unusable, and maintaining the right to navigate in space is a reasonable extension of the traditional government mission of keeping sea lanes open. I’m not sure what lies beyond space exploration, but I can say that harping on these same two points is not going to get these people the Mars missions they want.


20120403

Trayvon Martin and Outrageously Bad Decisions

Unless you’ve been living in a cave, you’ve heard about Trayvon Martin, and his death at the hands of George Zimmerman, a Florida “community watch captain.” Trayvon Martin’s tragic death has prompted a national debate about racism in 21st century America, “Stand Your Ground Laws” and the expansion of gun rights, and how biased the LIEberal Media is (Warning: Link to Fox News, go down the rabbit hole at your own risk). In all the heat about hot-button issues and the he-said-she-said arguments about the exact circumstances of events surrounding the shooting, we’ve lost sight of two important issues.

George Zimmerman shooting and killing Trayvon Martin was a human tragedy, but what transforms it from a tragedy to an outrage is that George Zimmerman walked out of police custody without being charged with any crime, on the say-so of a very small group of police officials. We like to believe that the justice system is about finding the truth; that the law looks like 12 Angry Men. But the truth does not exist out in the world, to be found and collected like a pebble. In law, as in science, the truth is constructed: a single version of reality emerges from the rhetorical contestation between opposing parties.

We accept how the legal system constructs the truths of innocence or guilt because it combines the technocratic expertise of lawyers, prosecutors, and judges, with the democratic deliberation of a jury of our peers. By and large, we believe that the system works, and in cases where it does not work, we can point to the transparent functioning of the system, and critique how the case diverged from our desire for justice.

In the Trayvon Martin case, justice was constructed by the Sanford Police Department, interpreted according to a relatively new law passed by a vocal and powerful political minority of gun rights advocates, rather than the broader Common Law understanding of murder and culpability. That a man can walk away from a shooting death, without any sort of judicial inquiry, purely at the discretion of the police, is outrageous.

The second fact which has been lost is that the reason Trayvon Martin was walking down that street at that time was that he had been suspended from school for 10 days for “possession of a pipe and a baggie which may have contained marijuana.” Some conservative pundits have been using this as evidence that Trayvon Martin was a thug and Zimmerman was right to shoot him, but really: a 17-year-old smoking marijuana? Oh My God! The Horror, the Horror! Call the DEA and Interpol, there’s a dangerous criminal on the loose!

No. What’s outrageous is that we believe that kicking a kid out of school is a reasonable form of discipline. Discipline is supposed to be an act of ‘strict training’, according to Foucault’s reading of 18th century education structures. Suspension teaches a student that misbehavior results in more free time, and, upon return to the classroom, a greater degree of confusion. Perhaps the rationale behind suspension is that it is supposed to provide time to reflect on one’s errors, in the manner of the penitentiary (literally a place to be penitent), but this implies an unrealistically optimistic appraisal of teenagers’ ability to reflect. The only way in which suspension might be effective is in removing a disruptive student from the population so that others can be educated. But even if that is true, the way that suspension is applied is piecemeal and ineffective.

It’s an undeniable statistical fact that being young, black, and male in America is a very bad idea. Black men are massively over-represented in suspensions, prisons, and ultimately the morgue. A slippery slope leads from a youthful disciplinary violation to lower grades, reduced economic opportunities, and higher crime. Even if Jim Crow is dead and buried, we still absurdly punish people for their skin color rather than their choices. This is morally wrong, this is a persistent human tragedy, and it is one that we as Americans have accepted and ignored for far too long. There are no simple solutions here, no easy path to justice.

The death of Trayvon Martin is not the result of a broad cultural problem about which we can wash our hands and say ‘it’s just too big to fix, so sorry.’ Rather, Trayvon Martin is dead and we are angry because of the decisions of small groups of bureaucrats who have made policy in a way that is easy for them to administer but socially injurious. This is a rightful target for our outrage: the persistent corruption that allows schools to slowly fail ‘problem’ students without consequences, and a police force that protects and serves the interests of the powerful rather than the weak.


20120228

Beyond Bell Labs

One of the ideas that I’m perennially kicking around is social support for science, or more precisely, “What kinds of science?” and “Why should the government support it?” When these questions are asked, the answer usually centers around some type of Basic (or Pure, or Fundamental) Research: Research without obvious applications, research that underlies other, more useful forms of science, research that should be funded by the government because, as a non-rival and non-excludable public good, it will be underfunded by the private sector. As conventional wisdom has it, basic research is a core input for economic innovation, and economic innovation is good for everybody. But really, when you look beyond the platitudes, what are we trying to do with science?

A recent New York Times profile of Bell Labs has brought my thoughts on the matter into sharp relief. You should really just read the whole piece, but if you’re not familiar with Bell Labs, they invented much of the 20th century, including the transistor, lasers, fiber optics, communications satellites, digital cameras, UNIX, and the C programming language. Why was Bell Labs so successful?

Quite intentionally, Bell Labs housed thinkers and doers under one roof. Purposefully mixed together on the transistor project were physicists, metallurgists and electrical engineers; side by side were specialists in theory, experimentation and manufacturing. Like an able concert hall conductor, he sought a harmony, and sometimes a tension, between scientific disciplines; between researchers and developers; and between soloists and groups… Bell Labs was sometimes caricatured as an ivory tower. But it is more aptly described as an ivory tower with a factory downstairs. It was clear to the researchers and engineers there that the ultimate aim of their organization was to transform new knowledge into new things.

[Mervin Kelly, Director of Bell Labs] gave his researchers not only freedom but also time. Lots of time — years to pursue what they felt was essential… In sum, he trusted people to create. And he trusted them to help one another create. To him, having at Bell Labs a number of scientific exemplars — “the guy who wrote the book,” as these standouts were often called, because they had in fact written the definitive book on a subject — was necessary. But so was putting them into the everyday mix. In an era before cubicles, all employees at Bell Labs were instructed to work with their doors open.

In essence, Bell Labs took the best in the world and aimed them towards “use-inspired basic research”, what science policy scholar, academic administrator, and NSF advisor Donald Stokes identified as Pasteur’s Quadrant. This kind of research aims at both a deeper understanding of the universe and immediate application to the social good, with Pasteur’s work on the bacterial origins of disease being the prototypical example. The standard narrative is that this type of ground-breaking, profitable, and socially useful research has ceased to occur. Stokes argues that Pasteur’s Quadrant has no public advocate. The American scientific system as it exists in universities does “basic research“, using the policy justifications laid down in the cornerstone document of American science policy, Vannevar Bush’s Science: The Endless Frontier. Mission agencies, such as the Department of Defense, fund “applied science” that addresses pressing issues, such as creating a plane invisible to radar, without concern for advancing theory. And since corporations have cut strategic research and development centers like Bell Labs or Xerox PARC in pursuit of short-term profits, nobody is doing what is actually the most significant type of research.

Another explanation is that politics poisoned the Republic of Science. Instead of pursuing truth, scientists were forced to chase Federal grants that directed research towards conventional, less risky, and less appealing science. As PayPal founder Peter Thiel elucidates in a recent interview with Francis Fukuyama:

Peter Thiel: My libertarian views are qualified because I do think things worked better in the 1950s and 60s, but it’s an interesting question as to what went wrong with DARPA. It’s not like it has been defunded, so why has DARPA been doing so much less for the economy than it did forty or fifty years ago? Parts of it have become politicized. You can’t just write checks to the thirty smartest scientists in the United States. Instead there are bureaucratic processes, and I think the politicization of science—where a lot of scientists have to write grant applications, be subject to peer review, and have to get all these people to buy in—all this has been toxic, because the skills that make a great scientist and the skills that make a great politician are radically different. There are very few people who are both great scientists and great politicians. So a conservative account of what happened with science in the 20th century is that we had a decentralized, non-governmental approach all the way through the 1930s and early 1940s. At that point, the government could accelerate and push things tremendously, but only at the price of politicizing it over a series of decades. Today we have a hundred times more scientists than we did in 1920, but their productivity per capita is less than it used to be.

Francis Fukuyama: You certainly can’t explain the survival of the shuttle program except in political terms.

Peter Thiel: It was an extraordinary program. It cost more and did less and was probably less safe than the original Apollo program. In 2011, when it finally ended, there was a sense of the space age being over. Not quite, but it’s very far off from what we had decades ago. You could argue that we had more or better-targeted funding in the 1950s and 1960s, but the other place where the regulatory situation is radically different is that technology is much more heavily regulated than it used to be. It’s much harder to get a new drug through the FDA process. It takes a billion dollars. I don’t even know if you could get the polio vaccine approved today.

The scholar in me must add that Peter Thiel’s understanding of American science policy is very ahistorical, if not flat-out wrong. The current science policy and science funding apparatus that Thiel rails against is inherited from the Cold War, and that system was in turn developed from the research system set up during World War II. During this time, the Office of Scientific Research and Development was able to direct a much smaller scientific community in developing radar, computers, and the atomic bomb because its director, Vannevar Bush, personally knew every scientist of importance in the nation. And even then, the system directed the lion’s share of grants towards a handful of top universities, including Johns Hopkins, MIT, and Caltech. Vannevar Bush, for all his talents as a scientist and administrator, thought that the digital computer and rocketry were just fads, and would never amount to anything. If Vannevar Bush had actually been given sole, long-term control of American science policy, he would have delayed many fruitful fields of research, and likely have been the subject of high-profile hearings on cronyism and corruption in science, not from malfeasance per se, but just from the nature of his management style (you can see an echo of this in the allegations around DARPA director Regina E. Dugan and RedXDefense, LLC). The NSF and NIH are not perfect organizations by any means, but they have managed to avoid such massive and obvious failure over the past 50 years. Pretty good for agencies that haven't had a clear national goal since the collapse of the Soviet Union.

To return to the questions posed at the start of this essay, what is it about basic research that is important for innovation? I’d like to offer an operational definition of research: research is what scientists do. And what is it that scientists do? At the highest level, ignoring the details of any particular field of research: they observe things; they measure things; they change conditions and see how the measurements change; they repeat the changes and the measurements; they develop some sort of theory about what’s going on; and then they write up their results.* Sometimes the results get written up as a journal article, in which case it’s basic research. Other times, they get written up as a patent application, in which case it’s applied research. If nobody writes about it, then nobody learns about it, and it dies. Publishing is at the heart of science. The Royal Society started as a club to share the results of 17th century natural philosophers, and was widely emulated across the continent, which is why some scientific journals are still called the Letters of Such-and-Such Organization.

What I want to draw out here is that neither articles nor patents fit neatly into Stokes’ concept of Pasteur’s Quadrant. Attempts to bridge these forms of publishing, like university technology transfer offices and the Bayh-Dole Act, are crude hacks to get both patents and articles out of the same body of work. While the form and content of a scientific article or patent is basically arbitrary, in that there’s no reason why they have to look the way that they do as opposed to some other form, there is something to the idea of a separation between Ideas and Things, and the different standards of scientific success in each realm. But is the minimization of Pasteur’s Quadrant and innovation merely an artifact of the publishing process? Again, I think not.

What is it that distinguishes “real science” from the kind of thing that’s done in a high-school classroom? What is it that distinguishes a scientist from a non-scientist? The questions are related: In a high-school experiment the answer is in the back of the book, while in a real experiment the answer is not yet known. And a scientist is somebody who has made a contribution to the collective body of knowledge by solving an unknown problem. Or to use an operational definition, a scientist is somebody who has earned their PhD by completing a dissertation and convincing a committee of current scientists of its validity and novelty.

Essentially every professional scientist has a PhD (counter-examples welcome), and many scientists spend much of their time helping younger scientists earn their dissertations. Working backwards from our operational definition of science as what scientists do, and adding in the idea that all scientists have to earn a dissertation, I’d like to propose that basic research is any scientific problem posed such that a reasonably bright individual might be expected to solve it in the course of earning a PhD.

Where this gets tricky is that not all scientific problems are created equal. Some have clear and immediate applications (how do we cure this disease?), others are easy (what do cows eat?), some are opaque (what is ‘time’ made of?), and some are hard (how do we make net-energy-positive fusion?).** Most problems lie somewhere in between, but after several hundred years of directed scientific endeavor, I think I can safely say that a lot of the low-hanging fruit, easy problems with obvious applications, has been picked. What is left is either very hard or irrelevant to useful ends. Because basic research is operationally defined as solvable, it must therefore be irrelevant.

Basic research serves a clear purpose. We need a class of problems to separate people capable of doing science from those who cannot, and to separate good scientists from bad scientists (unless you trust Vannevar Bush and/or Peter Thiel to just write checks to the smartest scientists they know). There are creativity and problem-solving skills that a person gains only by formulating a novel hypothesis and proving original conclusions, and that cannot be obtained by replicating known results. And demanding that every PhD candidate be an Einstein or a Watson or a Crick is unfair to the vast majority of very capable scientists who will never win the Nobel Prize.

Basic research is necessary for renewing and sustaining a vibrant scientific community, but I think that scientists by and large are not taking the training wheels off their research. There are plenty of reasons to spend a career doing basic research: hiring decisions are based on publications, grants frequently demand results in a year or two, and the psychological rewards of completing a project or becoming the world expert in some sub-sub-sub-field all bias scientists towards ‘do-able’ basic research rather than high-impact problems that may take years and yield no result. But what was once a program to create new scientists has become the raison d’être of science, to the detriment of both innovation and the public support of science.

These incentives are both perverse and pervasive. My colleague John Carter McKnight wrote in an astute post on research and impact that:

“The system – precisely like the Soviet economy (look, I’m not going Gresham’s law here – I actually have a master’s degree in Soviet economic systems. Don’t ask.) doesn’t require quality in output past a bare minimum of peer review (which like Soviet production standards is gamed – since we all need to produce volume, we’re incentivized to accept crap output from others in return for their accepting our crap output) but rather quantity. Basic human nature points to a race to the bottom, or producing to the minimum acceptable standard.”

While John was writing about the humanities, the same argument applies to the sciences, where 40% of papers are not even cited once. Even scientists find others’ basic research boring and irrelevant.

During the Enlightenment, natural philosophy was reserved for wealthy gentlemen and those experimentalists who could secure a patron. These days, Big Science projects like the Large Hadron Collider, the Human Genome Project, or research into alternative energy are beyond the abilities of any single individual—breakthroughs require collaborations of large groups of people over years if not decades. Yet at the same time, big projects require consensus and generate their own momentum; they are ill-suited towards nimble, intellectual ventures. What kinds of institutions support good science?

Bell Labs was great in its time, but it was ignominiously shut down in 2008, and no other company has stepped up. The Manhattan Project was a major success, but at any time other than a national emergency it would have ended the careers of everybody involved due to waste and duplication of effort (four sites, three methods of separating fissile material, and two bomb designs). The government’s networks of in-house laboratories run by the Department of Energy, Department of Defense, NASA, and the National Institutes of Health don’t have the same kind of prestige or success that Bell Labs once held. This might be because they’re just as beholden to the yearly Congressional budget cycle as corporate labs are to quarterly reports, with no possibility of becoming rich or famous, or it might be because they’re typically funded at a compromise level that stifles success and encourages conservatism rather than economy (what’s the tally on abandoned NASA rockets since the Space Shuttle?). The logic of maximizing short-term political benefit (aka Congressional pork) while holding down long-term costs has gotten us fiascos like the Joint Strike Fighter, a space agency that cares more about holding onto decaying facilities than doing science, and a glut of NIH lab space. Fiddling with these big institutions at the margins is just that, fiddling.

I think there’s something to these operational definitions, so let’s try an operational question: “How can we encourage worthwhile science while minimizing the long tail of boring crap?” The New York Times article that led this piece talked about linking ivory-tower theories to the factory floor, and giving smart people time and freedom. I’ve talked about articles, patents, salaries, and other incentives. A great article in the New Yorker by Jonah Lehrer says that architecture itself can inhibit or promote creative thinking. But all of this is missing something key. To paraphrase Clausewitz, “Science is done by human beings.” Human beings grow up, grow old, and die; scientific institutions are designed to live forever. What if immortal scientific institutions are failing science as a human endeavor?

Bell Labs managed to draw in the best minds of an entire generation, and then slowly faded away. The engineers who built the Apollo program couldn’t find a worthy successor for their energies. From Steve Jobs to the Lockheed Skunk Works to the classic The Soul of a New Machine, we see charismatic leaders taking teams of dedicated young engineers to the breaking point and beyond in pursuit of real innovation, and those teams falling apart afterwards. When I was applying to grad school, a mentor told me, “Don’t go to [University X]. They did some great work in the early 90s, but they haven’t moved since.” Scientific institutions, as real entities staffed by human beings rather than abstract generators of knowledge, have a life cycle.

The age at which scientists win Nobel Prizes or receive their first grants has been slowly rising, and while the exact causes and effects are uncertain, I think that might be one indicator that the institution of science is slowing down. In a scientific version of the Peter Principle, we take the best scientists and promote them into administration, where they spend their time writing grants and herding post-docs rather than doing science. We make young scientists jump through an ever more complex series of hoops to get access to the good equipment and the big questions. The structure of science has become pyramidal, and old men guard the top. It’s no wonder that so much research is trivial, conservative, and aimed at the next rung on the career ladder rather than at shaking the foundations of knowledge.

So this is my humble proposal for fixing science. Stop trying to turn undergrads into grad students into professors into emeriti. Stop running the whole endeavor like some sort of backwards business, with metrics for impact within a department and no reward for doing anything outside your little field. Stop making the reproduction of the social structure of science the highest goal of science.

What if we just gave large groups of young people some basic training, equivalent to passing comps in a PhD program, and then let them loose in the lab? I’m not talking about small scale here. Why not throw open the doors of the Goddard Space Flight Center and Lawrence Berkeley National Laboratory to the brightest and most ambitious hackerspace DIYers and say “All this is yours. Show me something cool.” Let them govern themselves through some kind of Parecon system, with only a minimal level of government oversight. If an experiment fails, well, science is uncertain. If they haven’t done anything worthwhile in five years, well, maybe their funding should be cut.

One of the basic principles here (and this might be naïve) is that people can actually work together in good faith towards common goals. I remember from my time at Caltech, where collaborative work was a core principle, that people naturally formed study groups with others they could work well with. Make the core group of each lab similar in age and experience, to deliberately minimize the effects of bad expert knowledge and hierarchies based on authority rather than expertise (Clarke’s First Law: When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.) If somebody isn’t cut out for science, they’ll be gently eased out by real peer review, rather than the kabuki theater currently practiced by the journals.

What I want to make explicit is that each of these labs is by design a temporary entity. They’ll attract a flourishing community at their founding, and then slowly be pared down to a basic core. While they might be centers of scientific learning, I wouldn’t let young scientists spend more than a few years at a lab, and labs would be barred from recruiting. Each generation must make its own scientific center. And when any given lab is haunted by just a few old-timers, throw open the doors to a new generation of scientists to hack ancient experimental equipment and learn from the Freeman Dyson types hanging around.

This is just a utopian sketch, not a practical plan, and there are lots of open questions. Without strong ties to commercial or political end-users, might science just drift off into solipsistic irrelevance? Would breaking up labs by generation inspire true interdisciplinary research, or merely deprive junior scientists of expert mentoring? How would the funding and governing mechanism really work, and how would we prevent corruption and pathological accumulations of power? I don’t have good answers to these questions, but I think that there might be something to linking the dynamics of scientific (and economic and political) institutions to human cycles rather than some arbitrary standard of knowledge. And could it really be worse—more expensive, less innovative, and less personally fulfilling—than the current system?

((And I wouldn’t drag you, my loyal readers, through 3500 words on science policy without some kind of payoff in the form of a speculative proposal))

*I fully expect you guys to tear this definition to shreds.

**And yes, I’m blurring the lines between science and technology here. You know what I mean, deal with it.


20120117

Why Andrew Sullivan is Wrong About Obama

Andrew Sullivan has been blowing up the internet with an article about how Obama has outsmarted his critics on the Left and on the Right, by playing a long game that has allowed him to achieve meaningful policy advances without grandstanding or drama. Yet while Obama has achieved policy successes, he has failed to establish the government as a credible force for good. Andrew Sullivan misses the cultural forest for the policy trees. I didn’t want Obama to be FDR; I just wanted him to reverse the worst parts of the Reagan revolution. Instead, at this rate Obama is going to wind up looking more like Richard Nixon than Ronald Reagan.

In Sullivan’s words:

Obama was not elected, despite liberal fantasies, to be a left-wing crusader. He was elected as a pragmatic, unifying reformist who would be more responsible than Bush.

And what have we seen? A recurring pattern. To use the terms Obama first employed in his inaugural address: the president begins by extending a hand to his opponents; when they respond by raising a fist, he demonstrates that they are the source of the problem; then, finally, he moves to his preferred position of moderate liberalism and fights for it without being effectively tarred as an ideologue or a divider.

This is essentially correct. Obama has achieved some notable policy successes, and I for one have greatly enjoyed the frothy fury of the Republican primary, but come November, the election will be over, and somebody will have to govern. And the fact is they’ll do so with a population that trusts the government less than ever before. It’s what David Brooks calls the instrument problem: only 10% of Americans trust the government to do the right thing, even as they rely on it to secure the borders, ensure the safety of food and drugs, and provide healthcare, social security, and unemployment insurance. Suzanne Mettler identifies the problem as the submerged state. Most of the government programs that benefit the middle class are either invisible or run so smoothly that more than half the people who use them don’t think they’re using a government program at all. If you don’t think the government supports you, why would you support the government?

I’d like to compare two Republican presidents, not on their conservative credentials, but on their legacies. By conventional measures, Nixon was the far better president. He founded the EPA, opened relations with China, unilaterally renounced the development of biological weapons, negotiated the first arms control treaty with the USSR, ended the Vietnam War (eventually), took America off the gold standard, reduced inflation, launched the War on Cancer, and saw Americans land on the Moon. Reagan presided over ballooning budget deficits, used government power to crush the unions, cut taxes only to raise them, ignored AIDS for several years, supervised the Iran-Contra affair, oversaw a massive arms race, slashed anti-poverty programs, pushed the war on drugs, and saw the Challenger explode.

Yet for all this, Nixon’s legacy is “I am not a crook,” and Reagan’s legacy is “The nine most terrifying words in the English language are: ‘I'm from the government and I'm here to help.’” Sure, Nixon was a paranoid lowlife with the moral instincts of a hammerhead shark, but he led many initiatives which America is rightfully proud of. Reagan’s accomplishments are far thinner, but he was the Great Communicator, and he established a political dialogue that is with us today, an undead ideology that flows through the Tea Party and cripples the ability to govern.

Sullivan thinks that Obama’s opponents will be punished for their carelessness with the truth, but I’m not so sure. Paul Krugman believes that Mitt Romney is running a post-truth campaign, and Romney is the most reasonable of the Republican candidates, or at least the least insane. Being elected today requires that you believe a dozen contradictory things before breakfast (to link to Krugman again). I’ve seen no backlash against public figures for spouting obvious falsehoods, and even the New York Times is openly wondering whether it should challenge false statements made by the people quoted in its articles.

I supported Obama because I believed that he could articulate a vision for American democracy in the 21st century. I thought that the author of Dreams from My Father, the 2004 Democratic Convention keynote, and the speech on Reverend Wright would be somebody who could inspire America in the same way that Kennedy and Reagan did. We needed, and still need, inspiration more than any specific policy solution. I believed that, roused to action, the American people would find their own solutions to major problems like healthcare, energy, education, and the war.

Instead, Barack Obama has presided over an ugly and secretive government. It is a government that uses drones to kill terrorists on the other side of the world while making the absurd claim, in the words of senior counterterrorism official John O. Brennan, that “There hasn’t been a single collateral death because of the exceptional proficiency, precision of the capabilities we’ve been able to develop,” despite ample evidence to the contrary. It is a government that has failed to address basic concerns about hidden risks and ‘shadow banks’ in the financial system. And while the rancor and insanity of the 112th Congress is not Obama’s fault, the White House is little better. On the Keystone XL pipeline and the Plan B birth control pill, the Obama administration has given the impression that it makes decisions based not on evidence or on what would be right for the country, but on what is most politically expedient. It is a short-sighted tactic that erodes the president’s own credibility.

David Brooks, at the end of his editorial on the instrument problem, says:

“If Democrats can’t restore Americans’ trust in government, it really doesn’t matter what problems they identify and what plans they propose. No one will believe in the instrument they rely on for solutions.”

I do not want people to uncritically trust Big Government, but America has passed beyond reasonable skepticism into political solipsism. Congress is less popular than polygamy, the BP oil spill, and Maoism. If Obama cannot restore some basic faith in government, then he will be a failure, no matter how many policy successes he manages.


20111121

The Vaccine Controversy

This past Friday I had the chance to meet Mark Largent, a historian of science at Michigan State University who, after writing an excellent history of American eugenics, is working on a history of the anti-vaccination movement. The anti-vaccination movement is one of the more contentious flashpoints in popular culture, with views ranging from vaccines as the deliberate poisoning of children by doctors, to anti-vaccinationism as anti-science nonsense that threatens to reverse a century of healthcare gains. Largent’s methodology is to look at the people involved and try to see the world as they believe it to be, without doing violence to their views. The question of whether vaccines cause autism is scientifically and socially irrelevant. But it is a proxy for a wider and more important spectrum of beliefs about personal responsibility and biomedical interventions, the interface between personal liberty and public goods, and the political consequences of these beliefs.

Some numbers: currently, 40% of American parents have delayed one or more recommended vaccines, and 11.5% have refused a state-mandated vaccine. Twenty-three states, containing more than half the population, allow “philosophical exemptions” to mandatory vaccination, which are trivial to obtain. The number of inoculations given to children has increased from 9 in the mid-1980s to 26 today. As a single father, Largent understands the anti-vaccine movement on a basic level: babies hate shots, and doctors administer dozens of them from seconds after birth to two years old.

The details of “vaccines-cause-autism” are too complex to go into here, but Largent is an expert on Andrew Wakefield, the now-discredited British physician who authored the withdrawn Lancet study suggesting a link between the MMR vaccine and autism, and on Jenny McCarthy, who campaigned against the mercury-containing preservative thimerosal in the US. As for the scientific issue, it is settled: vaccines do not cause autism. Denmark, which keeps comprehensive health records, shows no difference in autism rates between vaccinated, partially vaccinated, and unvaccinated children. We don’t know what causes autism, or why cases of autism are increasing, but the rise is probably related to more rigorous screening and older mothers rather than to any external cause. The epidemiological case linking vaccines to autism is about as strong as the case linking cellphone radiation to cancer, which is to say non-existent.
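To make concrete what a null result like the Danish one means, here is a toy two-proportion z-test in Python, the general kind of comparison registry studies run between cohorts. Every number below is an invented placeholder, not the Danish figures; this is a sketch of the mechanics, nothing more.

import math

def two_proportion_z(cases_a, n_a, cases_b, n_b):
    # Z statistic for the null hypothesis that both cohorts share one rate.
    rate_a, rate_b = cases_a / n_a, cases_b / n_b
    pooled = (cases_a + cases_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (rate_a - rate_b) / se

# Hypothetical cohort counts (diagnoses, cohort size), NOT real data.
z = two_proportion_z(440, 100_000,  # vaccinated children
                     44, 10_000)    # unvaccinated children
print(f"z = {z:.2f}")  # |z| below 1.96 means no detectable difference at p = 0.05

When the underlying rates are the same, the statistic hovers near zero no matter how large the cohorts grow, which is the shape of the Danish result.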

But parents, looking for absolute safety and certainty for their children, aren’t convinced by scientific studies, simply because it is effectively impossible to prove a negative to their standards. A variety of pro-vaccine advocates, Seth Mnookin and Paul Offit among them, have cast this as the standard science-denialism narrative, with deluded and dangerous parents threatening to return us to the bad old days of polio. This “all-or-nothing” demonization is unhelpful, and serves merely to alienate the very parents doctors are trying to reach. Rather, Largent proposes that we need a wider social debate on the number and purpose of vaccines, and on the relationship between doctors, parents, and the teachers and daycare workers who are the first line of vaccine compliance.

Now, thinking about this in the context of my studies, this looks like a classic issue of biopolitics and competing epistemologies, and is tied directly into the consumerization of the American healthcare system. According to Foucault, modernity was marked by the rise of biopolitics. “One might say that the ancient right to take life or let live was replaced by a power to foster life or disallow it to the point of death.” While the sovereign state—literally a man in a shiny hat with a sword—killed his enemies to maintain order, the modern state tends to the population like a garden, keeping careful statistics and intervening to maintain population health.

From a bureaucratic-rationalist point of view, vaccines are an ideal tool, requiring minimal intervention and producing massive, observable effects on the rolls of births and deaths and on the frequency and severity of epidemics. Parents don’t see these facts, particularly when vaccines have been successful. What they do see is that babies hate vaccines. I’m not being flip when I say that the suffering of children is of no account to the bureaucratic perspective: the official CDC claim is that a third of babies are “fretful” after receiving vaccines. This epistemology justifies an unlimited expansion of the vaccination program, since any conceivable amount of fretfulness is offset by even a single prevented death. For parents and pediatricians, who must deal with the expense, inconvenience, and suffering of each shot, the facts appear very different. These mutually incompatible epistemologies mean that pro- and anti-vaccine advocates are talking past each other.

The second side of the story is how responsibility for maintaining health has been increasingly shifted onto patients. From the women’s health movement of the 1970s, with Our Bodies, Ourselves, to the 1997 Consumer Bill of Rights and Responsibilities, to Medicare Advantage plans, ordinary people are increasingly expected to take part in healthcare decisions that were previously the sole province of doctors. The anti-vaccine movement has members from the Granola Left and the Libertarian Right, but it is overwhelmingly composed of upper-middle-class women, precisely the people who have seen the greatest increase in medical knowledge and choice over the past few decades. Representatives of the healthcare system should not be surprised that patients empowered to make their own decisions sometimes make decisions against medical advice.

So how do we resolve this dilemma? The pro-vaccine advocates suggest that we either force people to get vaccinated, a major intrusion of coercive power into a much more liberalized medical system, or somehow change the epistemology of parents. Both approaches are unworkable. Likewise, anti-vaccine advocates should lay off vaccines-cause-autism. They may have valid complaints, but at this point the science is in, and continuing to push that line really pisses scientists off. Advocates need to understand the standards of scientific knowledge, and what playing in a scientific arena entails.

In the vaccine controversy, as in so many others, what we need is a forum that balances both scientific and non-scientific knowledge, so that anti-vaccine advocates can speak their case without mangling science in the process. I don’t know what that forum would look like, who would attend, or how it would achieve this balance, but the need for better institutional engagement between science and society is clear.


20111012

Occupy Wall Street

It's been a month since the start of Occupy Wall Street, and this hmbl blggr is still trying to wrap his head around What It All Means. Occupy has inspired over one hundred similar protests across the United States and the world, prompted some serious discussion of economic inequality, and made pundits say some very silly things (Greenwald has a pretty good summary).

First off, why is everybody so angry? The hippie communists over at Business Insider have produced a series of 40-odd charts showing how unemployment has become permanent, corporate profits and the concentration of wealth have soared, and general wages have stayed flat over decades. I think we were all subconsciously aware of this, that something had gone profoundly wrong with the American Dream, but the Occupy movement is bringing it to the foreground.

And what is it that the Occupy movement wants? They’ve been castigated as radicals and anarchists who want society to give them a free lunch, but why don’t we look at the actual data? Rortybomb analyzed the common phrases found on the We Are the 99% tumblr (essentially a web equivalent of the Occupy movement), and found that the key phrases were jobs, debt, work, and children. In his analysis:

The demands are broadly health care, education and not to feel exploited at the high-level, and the desire to not live month-to-month on bills, food and rent and under less of the burden of debt at the practical level.

The people in the tumblr aren’t demanding to bring democracy into the workplace via large-scale unionization, much less shorter work days and more pay. They aren’t talking the language of mid-twentieth century liberalism, where everyone puts on blindfolds and cuts slices of pie to share. The 99% looks too beaten down to demand anything as grand as “fairness” in their distribution of the economy. There’s no calls for some sort of post-industrial personal fulfillment in their labor – very few even invoke the idea that a job should “mean something.” It’s straight out of antiquity – free us from the bondage of our debts and give us a basic ability to survive.

How mundane, how depressing, but also how liberating. Once people realize where they stand in relation to the big institutions of power, banks and governments and the like, they might begin to think critically about how to free themselves. The 99% might be peasants, but they are peasants with smartphones. Compared to the masses of history, they are connected, informed, and possess political power.
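The mechanics behind an analysis like Rortybomb’s are simple enough to sketch. Here is a toy word-count version in Python; the sample posts and stopword list are invented placeholders rather than his data or code, and a real run would feed in thousands of scraped posts and count multi-word phrases as well.

import re
from collections import Counter

# Invented stand-ins for scraped We Are the 99% posts.
posts = [
    "I work two jobs and still can't pay down my student debt.",
    "My children need health care that I can't afford.",
    "I lost my job, and the debt never goes away.",
]

# Filler words to ignore, so substantive words rise to the top.
STOPWORDS = {"i", "and", "my", "the", "a", "to", "that", "down", "still", "can't"}

def words(text):
    # Lowercase the post, strip punctuation, and drop filler words.
    return [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]

counts = Counter(w for post in posts for w in words(post))
print(counts.most_common(4))  # e.g. [('debt', 2), ('work', 1), ...]

Even on these three made-up posts, the crude count surfaces debt first, which is the flavor of what Rortybomb found at scale.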

Perhaps the single most impressive lesson of the Arab Spring is that power looks invincible until it no longer does, and once the hollowness at the core of power is revealed, the whole edifice can collapse with incredible rapidity. Egypt, Libya, and Tunisia relied on brutal police repression and terror. The best that Wall Street can do is ruin our credit scores and empty out our 401Ks, and for the nearly 20% of Americans out of work, and the 45% of 16-29 year olds unable to find a job, good credit and a 401K were never in the offing. As with the Foreclosure Resistance Movement, it turns out that the system of debt has surprisingly little coercive power, and very little to offer the 99%.

I do not have a crystal ball. The problems with the global economy are complex, systemic, and very difficult to resolve. There are no fast or easy solutions. Similarly, nobody knows what the endgame of Occupy Wall Street is, or even whether the protests will survive through the winter. But what I do know is that Occupy Wall Street is an opportunity for the 99% to figure out something new, some way of building social capital rather than financial capital.