20120228

Beyond Bell Labs

One of the ideas that I’m perennially kicking around is social support for science, or more precisely, “What kinds of science?” and “Why should the government support it?” When these questions are asked, the answer usually centers around some type of Basic (or Pure, or Fundamental) Research: Research without obvious applications, research that underlies other, more useful forms of science, research that should be funded by the government because, as a non-rival and non-excludable public good, it will be underfunded by the private sector. As conventional wisdom has it, basic research is a core input for economic innovation, and economic innovation is good for everybody. But really, when you look beyond the platitudes, what are we trying to do with science?

A recent New York Times profile of Bell Labs has brought my thoughts on the matter into sharp relief. You should really just read the whole piece, but if you’re not familiar with Bell Labs, they invented much of the 20th century, including the transistor, lasers, fiber optics, communications satellites, digital cameras, UNIX, and the C programming language. Why was Bell Labs so successful?

Quite intentionally, Bell Labs housed thinkers and doers under one roof. Purposefully mixed together on the transistor project were physicists, metallurgists and electrical engineers; side by side were specialists in theory, experimentation and manufacturing. Like an able concert hall conductor, [Kelly] sought a harmony, and sometimes a tension, between scientific disciplines; between researchers and developers; and between soloists and groups… Bell Labs was sometimes caricatured as an ivory tower. But it is more aptly described as an ivory tower with a factory downstairs. It was clear to the researchers and engineers there that the ultimate aim of their organization was to transform new knowledge into new things.

[Mervin Kelly, Director of Bell Labs] gave his researchers not only freedom but also time. Lots of time — years to pursue what they felt was essential… In sum, he trusted people to create. And he trusted them to help one another create. To him, having at Bell Labs a number of scientific exemplars — “the guy who wrote the book,” as these standouts were often called, because they had in fact written the definitive book on a subject — was necessary. But so was putting them into the everyday mix. In an era before cubicles, all employees at Bell Labs were instructed to work with their doors open.

In essence, Bell Labs took the best in the world and aimed them towards “use-inspired basic research”, what science policy scholar, academic administrator, and NSF advisor Donald Stokes identified as Pasteur’s Quadrant. This kind of research aims at both a deeper understanding of the universe and immediate application to the social good, with Pasteur’s work on the bacterial origins of disease being the prototypical example. The standard narrative is that this type of ground-breaking, profitable, and socially useful research has ceased to occur. Stokes argues that Pasteur’s quadrant has no public advocate. The American scientific system as it exists in universities does “basic research”, using the policy justifications laid down in the cornerstone document of American science policy, Vannevar Bush’s Science: The Endless Frontier. Mission agencies, such as the Department of Defense, fund “applied science” that addresses pressing issues such as creating a plane invisible to radar, without concern for advancing theory. And since corporations have cut strategic research and development centers like Bell Labs or Xerox PARC in pursuit of short-term profits, nobody is doing what is actually the most significant type of research.

Another explanation is that politics poisoned the Republic of Science. Instead of pursuing truth, scientists were forced to chase Federal grants that directed research towards conventional, less risky, and less appealing science. As PayPal founder Peter Thiel elucidates in a recent interview with Francis Fukuyama:

Peter Thiel: My libertarian views are qualified because I do think things worked better in the 1950s and 60s, but it’s an interesting question as to what went wrong with DARPA. It’s not like it has been defunded, so why has DARPA been doing so much less for the economy than it did forty or fifty years ago? Parts of it have become politicized. You can’t just write checks to the thirty smartest scientists in the United States. Instead there are bureaucratic processes, and I think the politicization of science—where a lot of scientists have to write grant applications, be subject to peer review, and have to get all these people to buy in—all this has been toxic, because the skills that make a great scientist and the skills that make a great politician are radically different. There are very few people who are both great scientists and great politicians. So a conservative account of what happened with science in the 20th century is that we had a decentralized, non-governmental approach all the way through the 1930s and early 1940s. At that point, the government could accelerate and push things tremendously, but only at the price of politicizing it over a series of decades. Today we have a hundred times more scientists than we did in 1920, but their productivity per capita is less than it used to be.

Francis Fukuyama: You certainly can’t explain the survival of the shuttle program except in political terms.

Peter Thiel: It was an extraordinary program. It cost more and did less and was probably less safe than the original Apollo program. In 2011, when it finally ended, there was a sense of the space age being over. Not quite, but it’s very far off from what we had decades ago. You could argue that we had more or better-targeted funding in the 1950s and 1960s, but the other place where the regulatory situation is radically different is that technology is much more heavily regulated than it used to be. It’s much harder to get a new drug through the FDA process. It takes a billion dollars. I don’t even know if you could get the polio vaccine approved today.

The scholar in me must add that Peter Thiel’s understanding of American science policy is very ahistorical, if not flat-out wrong. The current science policy and science funding apparatus that Thiel rails against is inherited from the Cold War, and that system was in turn developed from the research system set up during World War II. During this time, the Office of Scientific Research and Development was able to direct a much smaller scientific community in developing radar, computers, and the atomic bomb because its director, Vannevar Bush, personally knew every scientist of importance in the nation. And even then, the system directed the lion’s share of grants towards a handful of top universities, including Johns Hopkins, MIT, and Caltech. Vannevar Bush, for all his talents as a scientist and administrator, thought that the digital computer and rocketry were just fads, and would never amount to anything. If Vannevar Bush had actually been given sole, long-term control of American science policy, he would have delayed many fruitful fields of research, and likely would have been the subject of high-profile hearings on cronyism and corruption in science, not from malfeasance per se, but just from the nature of his management style (you can see an echo of this in the allegations around DARPA director Regina E. Dugan and RedXDefense, LLC). The NSF and NIH are not perfect organizations by any means, but they have managed to avoid such massive and obvious failure over the past 50 years. Pretty good for agencies that haven’t had a clear national goal since the collapse of the Soviet Union.

To return to the questions posed at the start of this essay, what is it about basic research that is important for innovation? I’d like to offer an operational definition of research: Research is what scientists do. And what is it that scientists do? At the highest level, ignoring the details of any particular field of research: They observe things; they measure things; they change conditions and see how the measurements change; they repeat the changes and the measurements; they develop some sort of theory about what’s going on; and then they write up their results.* Sometimes the results get written up as a journal article, in which case it’s basic research. Other times, they get written up as a patent application, in which case it’s applied research. If nobody writes about it, then nobody learns about it, and it dies. Publishing is at the heart of science. The Royal Society started as a club to share the results of 17th century natural philosophers, and was widely emulated across the continent, which is why some scientific journals are still called The Letters of Such-and-Such Organization.

What I want to draw out here is that neither articles nor patents fit neatly into Stokes’ concept of Pasteur’s Quadrant. Attempts to bridge these forms of publishing, like university technology transfer offices and the Bayh-Dole Act, are crude hacks to get both patents and articles out of the same body of work. While the form and content of a scientific article or patent is basically arbitrary, in that there’s no reason why they have to look the way they do as opposed to some other form, there is something to the idea of a separation between Ideas and Things, and to the different standards of scientific success in each realm. But is the minimization of Pasteur’s Quadrant and innovation merely an artifact of the publishing process? I think not.

What is it that distinguishes “real science” from the kind of thing that’s done in a high-school classroom? What is it that distinguishes a scientist from a non-scientist? The questions are related: In a high-school experiment the answer is in the back of the book, while in a real experiment the answer is not yet known. And a scientist is somebody who has made a contribution to the collective body of knowledge by solving an unknown problem. Or to use an operational definition, a scientist is somebody who has earned their PhD by completing a dissertation and convincing a committee of current scientists of its validity and novelty.

Essentially every professional scientist has a PhD (counter-examples welcome), and many scientists spend much of their time helping younger scientists earn their dissertations. Working backwards from our operational definition of science as what scientists do, and adding in the idea that all scientists have to earn a dissertation, I’d like to propose that basic research is any scientific problem posed such that a reasonably bright individual might be expected to solve it in the course of earning a PhD.

Where this gets tricky is that not all scientific problems are created equal. Some have clear and immediate applications (how do we cure this disease?), others are easy (what do cows eat?), some are opaque (what is ‘time’ made of?), and some are hard (how do we make net-energy-positive fusion?).** Most problems lie somewhere in between, but after several hundred years of directed scientific endeavor, I think I can safely say that a lot of the low-hanging fruit, the easy problems with obvious applications, has been picked. What is left is either very hard or irrelevant to useful ends. Because basic research is operationally defined as solvable, it must therefore be irrelevant.

Basic research serves a clear purpose. We need a class of problems to separate people capable of doing science from those who cannot, and to separate good scientists from bad scientists (unless you trust Vannevar Bush and/or Peter Thiel to just write checks to the smartest scientists they know). There are creativity and problem-solving skills that can only be gained by formulating a novel hypothesis and proving original conclusions; they cannot be obtained by replicating known results. And demanding that every PhD candidate be an Einstein or a Watson or a Crick is unfair to the vast majority of very capable scientists who will never win the Nobel Prize.

Basic research is necessary for renewing and sustaining a vibrant scientific community, but I think that scientists by and large are not taking the training wheels off their research. There are plenty of reasons to spend a career doing basic research: hiring decisions based on publications, grants that demand results within a year or two, and the psychological reward of completing a project or becoming the world expert in some sub-sub-sub-field all bias scientists towards ‘do-able’ basic research rather than high-impact problems that may take years and yield no result. But what was once a program to create new scientists has become the raison d’être of science, to the detriment of both innovation and the public support of science.

These incentives are both perverse and pervasive. My colleague John Carter McKnight wrote in an astute post on research and impact that:

“The system – precisely like the Soviet economy (look, I’m not going Gresham’s law here – I actually have a master’s degree in Soviet economic systems. Don’t ask.) doesn’t require quality in output past a bare minimum of peer review (which like Soviet production standards is gamed – since we all need to produce volume, we’re incentivized to accept crap output from others in return for their accepting our crap output) but rather quantity. Basic human nature points to a race to the bottom, or producing to the minimum acceptable standard.”

While John was writing about the humanities, the same argument applies to the sciences, where 40% of papers are not even cited once. Even scientists find each other’s basic research boring and irrelevant.

During the Enlightenment, natural philosophy was reserved for wealthy gentlemen and those experimentalists who could secure a patron. These days, Big Science projects like the Large Hadron Collider, the Human Genome Project, or research into alternative energy are beyond the abilities of any single individual—breakthroughs require collaborations of large groups of people over years if not decades. Yet at the same time, big projects require consensus and generate their own momentum; they are ill-suited to nimble intellectual ventures. What kinds of institutions support good science?

Bell Labs was great in its time, but its basic research program was ignominiously wound down in 2008, and no other company has stepped up. The Manhattan Project was a major success, but at any time other than a national emergency it would have ended the careers of everybody involved due to waste and duplication of effort (four sites, three methods of separating fissile material, and two bomb designs). The government’s networks of in-house laboratories run by the Department of Energy, Department of Defense, NASA, and the National Institutes of Health don’t have the same kind of prestige or success that Bell Labs once held. This might be because they’re just as beholden to the yearly Congressional budget cycle as corporate labs are to quarterly reports, minus any possibility of becoming rich or famous, or it might be because they’re typically funded at a compromise level that stifles success and encourages conservatism rather than economy (what’s the tally on abandoned NASA rockets since the Space Shuttle?). The logic of maximizing short-term political benefit (aka Congressional pork) while holding down long-term costs has gotten us fiascos like the Joint Strike Fighter, a space agency that cares more about holding onto decaying facilities than doing science, and a glut of NIH lab space. Fiddling with these big institutions at the margins is just that, fiddling.

I think there’s something to these operational definitions, so let’s try an operational question: “How can we encourage worthwhile science while minimizing the long tail of boring crap?” The New York Times article that led off this piece talked about linking ivory-tower theories to the factory floor, and giving smart people time and freedom. I’ve talked about articles, patents, salaries, and other incentives. A great article in the New Yorker by Jonah Lehrer says that architecture itself can inhibit or produce creative thinking. But all of this is missing something key. To paraphrase Clausewitz, “Science is done by human beings.” Human beings grow up, grow old, and die; scientific institutions are designed to live forever. What if immortal scientific institutions are failing science as a human endeavor?

Bell Labs managed to draw in the best minds of an entire generation, and then slowly faded away. The engineers that built the Apollo project couldn’t find a worthy successor for their energies. From Steve Jobs to the Lockheed Skunk Works or the classic The Soul of a New Machine, we see charismatic leaders taking teams of dedicated young engineers to the breaking point and beyond in pursuit of real innovation, and those teams falling apart afterwards. When I was applying to grad school, a mentor told me “Don’t go to [University X]. They did some great work in the early 90s, but they haven’t moved since.” Scientific institutions, as real entities staffed by human beings rather than abstract generators of knowledge, have a life-cycle.

The average age of Nobel Prize winners and first-time grant recipients has been slowly rising, and while the exact causes and effects are uncertain, I think that might be one indicator that the institution of science is slowing down. In a scientific version of the Peter Principle, we take the best scientists and promote them into administration, where they spend their time writing grants and herding post-docs rather than doing science. We make young scientists jump through an ever more complex series of hoops to get access to the good equipment and the big questions. The structure of science has become pyramidal, and old men guard the top. It’s no wonder that so much research is trivial, conservative, and aimed at the next rung in the career ladder rather than shaking the foundations of knowledge.

So this is my humble proposal for fixing science. Stop trying to turn undergrads into grad students into professors into emeriti. Stop running the whole endeavor like some sort of backwards business, with metrics for impact within a department and no reward for doing anything outside your little field. Stop making the reproduction of the social structure of science the highest goal of science.

What if we just gave large groups of young people some basic training, equivalent to passing comps in a PhD program, and then let them loose in the lab? I’m not talking about small scale here. Why not throw open the doors to the Goddard Space Flight Center and Lawrence Berkeley National Laboratory to the brightest and most ambitious hackerspace DIYers and say “All this is yours. Show me something cool.” Let them govern themselves through some kind of Parecon system, with only a minimal level of government oversight. If an experiment fails, well, science is uncertain. If they haven’t done anything worthwhile in 5 years, well, maybe their funding should be cut.

One of the basic principles here (and this might be naïve), is that people can actually work together in good faith towards common goals. I remember from my time at Caltech, where collaborative work was a core principle, that people naturally formed study groups with others that they could work well with. Make the core group of each lab similar in age and experience to deliberately minimize the effects of bad expert knowledge and hierarchies based on authority rather than expertise (Clarke’s First Law: When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.) If somebody isn’t cut out for science, they’ll be gently eased out. Real peer review, rather than the kabuki theater currently practiced by the journals.

What I want to make explicit is that each of these labs is by design a temporary entity. They’ll attract a flourishing community at their founding, and then slowly be pared down to a basic core. While they might be centers of scientific learning, I wouldn’t let young scientists spend more than a few years at a lab, and labs would be barred from recruiting. Each generation must make its own scientific center. And when any given lab is haunted by just a few old-timers, throw open the doors to a new generation of scientists to hack ancient experimental equipment and learn from the Freeman Dyson-types hanging around.

This is just a utopian sketch, not a practical plan, and there are lots of open questions. Without strong ties to commercial or political end-users, might science just drift off into solipsistic irrelevance? Would breaking up labs by generation inspire true interdisciplinary research, or merely deprive junior scientists of expert mentoring? How would the funding and governing mechanism really work, and how would we prevent corruption and pathological accumulations of power? I don’t have good answers to these questions, but I think that there might be something to linking the dynamics of scientific (and economic and political) institutions to human cycles rather than some arbitrary standard of knowledge. And could it really be worse—more expensive, less innovative, and less personally fulfilling—than the current system?

((And I wouldn’t drag you, my loyal readers, through 3500 words on science policy without some kind of payoff in the form of a speculative proposal))

*I fully expect you guys to tear this definition to shreds.

**And yes, I’m blurring the lines between science and technology here. You know what I mean, deal with it.


20120218

On the Contrary

I have just finished reading through On the Contrary, a collection of philosophical essays on neuroscience. For me, the main interest in this work is understanding how society perceives the social and philosophical implications of neuroscience. I can recommend the text, particularly section II: "Meaning, Qualia, and Emotion", for anyone wanting to become acquainted with how advances in neuroscience have impacted the philosophy of consciousness and free will.

Chapter 13, by Rick Grush and Patricia S. Churchland, in particular amused me. This is a critique of Sir Roger Penrose's quantum consciousness hypothesis. Some of Penrose's speculations have gained a foothold with those ignorant of neuroscience and physics, and it may be reasonable to point readers tempted by Penrose's speculations to this chapter, to slow the spread of misinformation.

My spin on this is that Penrose's hypothesis stems from an inexact understanding of computational complexity. Penrose believes that the human brain can solve problems that can only be solved by quantum computers. As I understand it, we do not yet completely understand what quantum computing buys us in terms of computational power. It seems to me that we have not adequately explored the limits of approximation algorithms with access to randomness to say whether classical physics can support cognition. Penrose invokes quantum computation to resolve speculative limits on classical computation, and looks for a quantum soul at the edges of scientific ignorance. I do not think Penrose has adequately motivated these claims with computational complexity, and the mechanisms he proposes (quantum effects in microtubules) are so speculative that falsifying them is a low priority for serious neuroscientists and physicists.

I side with the authors of this chapter: quantum consciousness is, at present, wishful thinking, motivated by the hunch that cognition is "magical", and so must depend on the only branch of physics which holds equal mystery: quantum physics. All reasoning after this is a series of speculations reminiscent of Descartes' speculation on the pineal gland. This critique of Penrose reminds me of the end of Exit Through the Gift Shop, wherein Banksy admits that, perhaps, there are some people who just ... should not do art. "I mean I always used to encourage everyone I met to make art, I used to think everyone should do it. I don't do that so much anymore". I'm afraid Penrose's quantum brain hypothesis is of a similar character: success in some scientific fields does not immediately qualify you to theorize in others.


20120207

DDNext: Doomed to Marketing Hell

The Great Edition Wars continue, and they show no sign of abating even though we have no idea what DDNext is going to look like. But in the absence of solid information, I'd like to engage in some rumor mongering and theorizing about the type of product that DDNext will be, the type of people that they're selling to, and why, unless a freaking miracle happens, DDNext will be a disaster.

I've heard that one reason WotC is working on a next edition is that D&D4e wasn't selling very well. And even if it had been selling a lot better, RPGs aren't terribly profitable, certainly not compared to cardboard crack like Magic: The Gathering. The corporate overlords at Hasbro wanted more money, and thought that a new edition would shake things up. Fair enough, but before we go deeper, what exactly is D&D?

The obvious answer is that D&D is a game. Games are good, games make money. Call of Duty: Modern Warfare 3 had the single biggest opening of any form of media ever, we're in a boardgames renaissance, and CCGs like Magic consistently make money. This is obvious, but this is also wrong. WotC doesn't sell a game; the game happens at the table with the gaming group. WotC sells books that help you game, and that is a business model that is dead on arrival.

Once you sell people the books they need to play the game, they won't buy any more books, and you won't make any money. Selling to GMs (setting guides, monster manuals, adventures) excludes 80% of your potential market. Selling to players with splatbooks introduces option and power creep. Worst of all, RPG books are inherently expensive to produce; they're glossy, large, have a lot of art, and drive editors insane. I paid $35 for the DMG2 recently, which was worth it because it's a good book, but it's definitely not an impulse purchase for most people.

As long as WotC tries to make money on D&D by selling books, they'll fail. The business model just isn't there. With D&D4e, they tried to transition to a service-based model with D&D Insider, the Character Builder, Compendium, etc, but I think they just wound up cannibalizing their own book sales and alienating traditionalist gamers. One of the players in my online game was talking about how D&D4e could have won him over with better computer tools, an actual working digital tabletop at launch, tablet integration, etc, but they didn't. There hasn't even been a real D&D4e videogame, aside from the Facebook Heroes of Neverwinter, which is a shame given how crunchy and awesome the battle system is. D&D4e and Final Fantasy Tactics would go together like chocolate and peanut butter. At the end of the day, WotC is not a software company, and doesn't have the skills to put together a really amazing web experience.

So let's turn towards the marketing of DDNext. From what I've seen, they're really aiming at an Old School rules experience. This is not a good move, IMO, since your most likely customers are people who have good memories of playing D&D back in the 80s, and you need to convince them to buy new books and move to a new system when A) they already own the books they need, B) they like what they're doing, C) everybody wants different things in their game, D) a large number of these people are bitter internet trolls who will fight anything new and different just because, and E) there's only so many of them. It's like trying to get people to switch cigarette brands; they just don't do it. It's way easier to focus on new smokers, I mean gamers.

So what of these new gamers? Dungeons and Dragons has a really strong brand, but it's a brand with problems. To quote the totally amazing Becky Chambers:

To people outside of the geek community, there is one phrase that conjures up a stereotype like no other: Dungeons & Dragons. I think folks see it as the crystal meth of geekery. You start innocently, just experimenting with a bit of Star Trek, then get sucked into comic book conventions in search of a more powerful kick, and before you know it, you’re rolling polyhedral dice in a dank basement, all hope of sex and hygiene lost forever.

"The crystal meth of geekery." Ouch, but not inaccurate. The good news is that we're a more geek-friendly culture than ever before. Videogames are socially acceptable, along with Harry Potter fandom, Dr Who, and Felicia Day. But somehow, D&D completely failed to capitalize on these potential new gamers. It is an utter tragedy that there's not a Harry Potter RPG, aside from Jared Sorenson's Broomstix. I understand why JK Rowling would not want a bunch of grubby nerds crawling all over her precious universe (because it doesn't make any sense when you look at it with a critical eye towards the economy, politics, etc), but think about how awesome stacks of "The Harry Potter Adventure Game" next to the novels would be. If they sold even 1% of what the books did, it would be the best selling RPG of all time-easily. ((And this is going to be an aside, but Hogwarts is totally gameable. A group of student wizards with different backgrounds and skills has to balance school work and their social lives while investigating strange goings-on at the school. Include rules for the canon characters and ways to create your own, and you've got a great game.))

In closing, WotC is in a real bind. They can choose to double down on traditional gamers, which will never get them the earnings they want, even if everybody gaming today drops what they're doing to play DDNext. They can transition from a company that sells books to a company that enables D&D play with more profitable tools, but I doubt they have the skills and imagination to pull that off (and it also puts them in direct competition with MMORPGs etc). And they're tied to a brand that is both their biggest selling point and also immensely toxic.

It is a tragic fact that DDNext will rise or fall by the quality of its marketing rather than its gameplay. So if you were Don Draper, how would you save Dungeons & Dragons?


20120204

Radiolab : Words

In case you missed it, Radiolab has a wonderful episode on the connection between language and thought, and what it is like to exist without language. What I found most fascinating was the evidence for how language facilitates certain types of thought. I was also moved by the emotional story of an adult learning language for the first time, and subsequently being unable to regain or relate the nature of his subjective experience before language.


20120202

Lessons from Hemophilia

Recently I had the pleasure of spending some time with Corey Dubin, thinker, activist, and president of the Committee of Ten Thousand. Corey is a really interesting person (this article gives a decent overview of his past activities): a child of the 60s, owner of amazing Zardoz hair (click at your own risk), and, finally, a member of the "Triple H club": hemophiliac, HIV+, and hepatitis+, and he has been for many, many years. What happened to Corey Dubin was not an accident of fate, genetics, or public policy. Rather, it was the direct consequence of decisions made about the American blood supply, and his experience has important lessons to teach us about what counts as an acceptable risk in a highly connected world.


First a little context. Not all that long ago, hemophilia was an invariably fatal disease. Internal bleeding caused extremely painful swelling, blood corroded the bones and damaged the organs, and it was rare for somebody with the condition to live beyond their teens. The most famous historical hemophiliac was Prince Alexei Nikolaevich Romanov, whose condition played a minor but significant role in the Russian Revolution, as it allowed Rasputin to rise in the court and alienated the Tsar from his most natural supporters in the aristocracy.

The 1960s saw the first effective treatments for hemophilia, with the discovery of cryoprecipitate and then the concentrated blood-clotting proteins Factor VIII and IX. With these treatments, hemophiliacs were able to lead normal lives. Science and medicine had triumphed in reducing hemophilia from a fatal disease to a chronic condition. Of course, this led to a whole new industry in supplying blood products. Plasma was collected from paid donors, mixed into very large batches drawn from over 30,000 donors, processed into Factor, and then sold to doctors and patients.

This system worked fine until the early 80s, when a virulent new disease emerged on the fringes of society. Homosexuals, IV drug users, and hemophiliacs were dying of strange lesions and secondary infections. The Centers for Disease Control soon realized that it was a blood-borne disease, but lacked the political clout to make the Food and Drug Administration and pharmaceutical companies act. The FDA vacillated, refusing to take Factor off the market for several years, and knowingly allowed contaminated blood to be shipped overseas. The end result was that an entire generation of hemophiliacs was infected with a fatal disease.

The point here is not that regulators made terrible, and in some cases unethical, decisions in the midst of the AIDS crisis, although they did (and if you find this interesting, I highly recommend the documentary Bad Blood). The point is that the blood system was set up to fail.

The blood supply was contaminated from the beginning with hepatitis. Everybody involved knew as much, but they believed that hepatitis was a fair trade for a cure for hemophilia. Perhaps they were right, but through a combination of greed, arrogance, and laziness, authorities ignored techniques that could have purified the blood supply; things as simple as running plasma through columns of detergent. Similarly, mixing donor samples into large batches increased profits, but also increased the transmission rate by orders of magnitude. A single bad donor could infect thousands of people.
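To make that last point concrete, here is a minimal back-of-the-envelope sketch (mine, not anyone's official risk model). It assumes, purely for illustration, that each donation independently carries a pathogen with some small probability; the prevalence figure below is hypothetical, not the actual rate in the 1980s plasma pool.

    # Illustrative only: if each donation independently carries a pathogen with
    # probability p, then a batch pooled from n donations is contaminated with
    # probability 1 - (1 - p)^n.

    def batch_contamination_probability(p, n):
        """Chance that a pool of n donations contains at least one infected unit."""
        return 1.0 - (1.0 - p) ** n

    prevalence = 1e-4  # hypothetical: 1 infected donor in 10,000
    for pool_size in (1, 100, 10000, 30000):
        risk = batch_contamination_probability(prevalence, pool_size)
        print(f"pool of {pool_size:>5} donations -> contamination risk {risk:.2%}")

With that made-up prevalence of one bad donor in ten thousand, a single donation is almost certainly clean, but a 30,000-donation pool comes out contaminated roughly 95% of the time. That is the "orders of magnitude" at work, and it is how one donor could reach thousands of patients.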

We have to be very careful about what counts as an "acceptable risk." New technologies present novel risks, and do not have adequate safety mechanisms. Risk is part of the process of innovation, but technologies that do not become safer over time deserve a critical re-evaluation. The other lesson is that we are all connected. Hemophiliacs are intimately connected to thousands of strangers through the blood supply, but to a lesser extent, we all have the same problems of trust and reliability. The food supply is highly commoditized, which means that food poisoning can affect the entire nation, accounting for an estimated 48 million illnesses, 128,000 hospitalizations, and 3,000 deaths each year. As we become more dependent on internet-enabled and 'cloud' services, we become more vulnerable to hackers. The stability of countries on the other side of the world can shake the US economy, as proven by repeated oil price shocks. And pollution does not respect national boundaries; we all breathe the same air and drink the same water.

There is no cure for risk. Regulation is an inherently difficult task: the barrier of specialized expertise and the lure of industry money can eventually lull even the most dedicated watchdog agency into passivity. Independent citizens' groups and hard-hitting journalism are the only long-term antidotes to regulatory capture, and they require continual social investment and support. When industry or the experts say that "this is too complex" or that "this will be too expensive", we should demand clearer explanations and sensible alternatives. To do otherwise is to invite disaster. Maybe not today, maybe not tomorrow, but eventually.

Even if the blood supply had been safe in the 1980s, some hemophiliacs would have been exposed to AIDS and some would have died, but the scale of the human tragedy would have been far lower. To this day, the Centers for Disease Control uses hemophiliacs as the 'canary in the coal mine' for signs of contamination in the national blood supply. But the story of hemophiliacs and the blood supply also serves as a lesson about techno-social systems and 'normal accidents', and how they can be prevented. Good system design and careful monitoring save lives.