Beyond Bell Labs
One of the ideas that I’m perennially kicking around is social support for science, or more precisely, “What kinds of science?” and “Why should the government support it?” When these questions are asked, the answer usually centers around some type of Basic (or Pure, or Fundamental) Research: Research without obvious applications, research that underlies other, more useful forms of science, research that should be funded by the government because, as a non-rival and non-excludable public good, it will be underfunded by the private sector. As conventional wisdom has it, basic research is a core input for economic innovation, and economic innovation is good for everybody. But really, when you look beyond the platitudes, what are we trying to do with science?
A recent New York Times profile of Bell Labs has brought my thoughts on the matter into sharp relief. You should really just read the whole piece, but if you’re not familiar with Bell Labs, they invented much of the 20th century, including the semiconductor, lasers, fiber optics, communications satellites, digital cameras, UNIX, and the C programming language. Why was Bell Labs so successful?
Quite intentionally, Bell Labs housed thinkers and doers under one roof. Purposefully mixed together on the transistor project were physicists, metallurgists and electrical engineers; side by side were specialists in theory, experimentation and manufacturing. Like an able concert hall conductor, [Kelly] sought a harmony, and sometimes a tension, between scientific disciplines; between researchers and developers; and between soloists and groups… Bell Labs was sometimes caricatured as an ivory tower. But it is more aptly described as an ivory tower with a factory downstairs. It was clear to the researchers and engineers there that the ultimate aim of their organization was to transform new knowledge into new things.
[Mervin Kelly, director of Bell Labs] gave his researchers not only freedom but also time. Lots of time — years to pursue what they felt was essential… In sum, he trusted people to create. And he trusted them to help one another create. To him, having at Bell Labs a number of scientific exemplars — “the guy who wrote the book,” as these standouts were often called, because they had in fact written the definitive book on a subject — was necessary. But so was putting them into the everyday mix. In an era before cubicles, all employees at Bell Labs were instructed to work with their doors open.
In essence, Bell Labs took the best minds in the world and aimed them at “use-inspired basic research”, what science policy scholar, academic administrator, and NSF advisor Donald Stokes identified as Pasteur’s Quadrant. This kind of research aims at both a deeper understanding of the universe and immediate application to the social good, with Pasteur’s work on the bacterial origins of disease being the prototypical example. The standard narrative is that this type of ground-breaking, profitable, and socially useful research has ceased to occur. Stokes argues that Pasteur’s Quadrant has no public advocate. The American scientific system as it exists in universities does “basic research”, using the policy justifications laid down in the cornerstone document of American science policy, Vannevar Bush’s Science: The Endless Frontier. Mission agencies, such as the Department of Defense, fund “applied science” that addresses pressing issues, such as creating a plane invisible to radar, without concern for advancing theory. And since corporations have cut strategic research and development centers like Bell Labs or Xerox PARC in pursuit of short-term profits, nobody is doing what is actually the most significant type of research.
Another explanation is that politics poisoned the Republic of Science. Instead of pursuing truth, scientists were forced to chase Federal grants that directed research towards conventional, less risky, and less appealing science. As PayPal founder Peter Thiel elucidates in a recent interview with Francis Fukuyama:
Peter Thiel: My libertarian views are qualified because I do think things worked better in the 1950s and 60s, but it’s an interesting question as to what went wrong with DARPA. It’s not like it has been defunded, so why has DARPA been doing so much less for the economy than it did forty or fifty years ago? Parts of it have become politicized. You can’t just write checks to the thirty smartest scientists in the United States. Instead there are bureaucratic processes, and I think the politicization of science—where a lot of scientists have to write grant applications, be subject to peer review, and have to get all these people to buy in—all this has been toxic, because the skills that make a great scientist and the skills that make a great politician are radically different. There are very few people who are both great scientists and great politicians. So a conservative account of what happened with science in the 20th century is that we had a decentralized, non-governmental approach all the way through the 1930s and early 1940s. At that point, the government could accelerate and push things tremendously, but only at the price of politicizing it over a series of decades. Today we have a hundred times more scientists than we did in 1920, but their productivity per capita is less than it used to be.
Francis Fukuyama: You certainly can’t explain the survival of the shuttle program except in political terms.
Peter Thiel: It was an extraordinary program. It cost more and did less and was probably less safe than the original Apollo program. In 2011, when it finally ended, there was a sense of the space age being over. Not quite, but it’s very far off from what we had decades ago. You could argue that we had more or better-targeted funding in the 1950s and 1960s, but the other place where the regulatory situation is radically different is that technology is much more heavily regulated than it used to be. It’s much harder to get a new drug through the FDA process. It takes a billion dollars. I don’t even know if you could get the polio vaccine approved today.
The scholar in me must add that Peter Thiel’s understanding of American science policy is very ahistorical, if not flat-out wrong. The current science policy and science funding apparatus that Thiel rails against is inherited from the Cold War, and that system was in turn developed from the research system set up during World War II. During this time, the Office of Scientific Research and Development was able to direct a much smaller scientific community in developing radar, computers, and the atomic bomb because its director, Vannevar Bush, personally knew every scientist of importance in the nation. And even then, the system directed the lion’s share of grants towards a handful of top universities, including Johns Hopkins, MIT, and Caltech. Vannevar Bush, for all his talents as a scientist and administrator, thought that the digital computer and rocketry were just fads that would never amount to anything. If Vannevar Bush had actually been given sole, long-term control of American science policy, he would have delayed many fruitful fields of research, and would likely have been the subject of high-profile hearings on cronyism and corruption in science, not from malfeasance per se, but just from the nature of his management style (you can see an echo of this in the allegations around DARPA director Regina E. Dugan and RedXDefense, LLC). The NSF and NIH are not perfect organizations by any means, but they have managed to avoid such massive and obvious failures over the past 50 years. Pretty good for agencies that haven’t had a clear national goal since the collapse of the Soviet Union.
To return to the questions posed at the start of this essay, what is it about basic research that is important for innovation? I’d like to offer an operational definition of research: Research is what scientists do. And what is it that scientists do? At the highest level, ignoring the details of any particular field of research: They observe things; they measure things; they change conditions and see how the measurements change; they repeat the changes and the measurements; they develop some sort of theory about what’s going on; and then they write up their results.* Sometimes the results get written up as a journal article, in which case it’s basic research. Other times, they get written up as a patent application, in which case it’s applied research. If nobody writes about it, then nobody learns about it, and it dies. Publishing is at the heart of science. The Royal Society started as a club to share the results of 17th-century natural philosophers, and was widely emulated across the Continent, which is why some scientific journals are still called The Letters of Such-and-Such Organization.
What I want to draw out here is that neither articles nor patents fit neatly into Stokes’ concept of Pasteur’s Quadrant. Attempts to bridge these forms of publishing, like university technology transfer offices and the Bayh-Dole Act, are crude hacks to get both patents and articles out of the same body of work. While the form and content of a scientific article or patent is basically arbitrary, in that there’s no reason why they have to look the way they do as opposed to some other form, there is something to the idea of a separation between Ideas and Things, and to the different standards of scientific success in each realm. But is the marginalization of Pasteur’s Quadrant, and of innovation, merely an artifact of the publishing process? I think not.
What is it that distinguishes “real science” from the kind of thing that’s done in a high-school classroom? What is it that distinguishes a scientist from a non-scientist? The questions are related: In a high-school experiment the answer is in the back of the book, while in a real experiment the answer is not yet known. And a scientist is somebody who has made a contribution to the collective body of knowledge by solving an unknown problem. Or to use an operational definition, a scientist is somebody who has earned their PhD by completing a dissertation and convincing a committee of current scientists of its validity and novelty.
Essentially every professional scientist has a PhD (counter-examples welcome), and many scientists spend much of their time helping younger scientists earn their dissertations. Working backwards from our operational definition of research as what scientists do, and adding in the idea that all scientists have to earn a dissertation, I’d like to propose that basic research is any scientific problem posed such that a reasonably bright individual might be expected to solve it in the course of earning a PhD.
Where this gets tricky is that not all scientific problems are created equal. Some have clear and immediate applications (how do we cure this disease?), others are easy (what do cows eat?), some are opaque (what is ‘time’ made of?), and some are hard (how do we make net-energy-positive fusion?).** Most problems lie somewhere in between, but after several hundred years of directed scientific endeavor, I think I can safely say that a lot of the low-hanging fruit (easy problems with obvious applications) has been picked. What is left is either very hard or irrelevant to useful ends. Because basic research is operationally defined as solvable, it must therefore be irrelevant.
Basic research serves a clear purpose. We need a class of problems to separate people who are capable of doing science from those who are not, and to separate good scientists from bad ones (unless you trust Vannevar Bush and/or Peter Thiel to just write checks to the smartest scientists they know). There are creativity and problem-solving skills, gained by formulating a novel hypothesis and proving original conclusions, that cannot be obtained by replicating known results. And demanding that every PhD candidate be an Einstein or a Watson or a Crick is unfair to the vast majority of very capable scientists who will never win the Nobel Prize.
Basic research is necessary for renewing and sustaining a vibrant scientific community, but I think that scientists by and large are not taking the training wheels off their research. There are plenty of reasons to spend a career doing basic research: hiring decisions are based on publications, grants frequently demand results in a year or two, and the psychological rewards of completing a project or becoming the world expert in some sub-sub-sub-field all bias scientists towards ‘do-able’ basic research rather than high-impact problems that may take years and yield no result. But what was once a program to create new scientists has become the raison d’être of science, to the detriment of both innovation and the public support of science.
These incentives are both perverse and pervasive. As my colleague John Carter McKnight wrote in an astute post on research and impact:
“The system – precisely like the Soviet economy (look, I’m not going Gresham’s law here – I actually have a master’s degree in Soviet economic systems. Don’t ask.) doesn’t require quality in output past a bare minimum of peer review (which like Soviet production standards is gamed – since we all need to produce volume, we’re incentivized to accept crap output from others in return for their accepting our crap output) but rather quantity. Basic human nature points to a race to the bottom, or producing to the minimum acceptable standard.”
While John was writing about the humanities, the same argument applies to the sciences, where 40% of papers are not even cited once. Even scientists find others’ basic research boring and irrelevant.
During the Enlightenment, natural philosophy was reserved for wealthy gentlemen and those experimentalists who could secure a patron. These days, Big Science projects like the Large Hadron Collider, the Human Genome Project, or research into alternative energy are beyond the abilities of any single individual; breakthroughs require collaborations of large groups of people over years, if not decades. Yet at the same time, big projects require consensus and generate their own momentum; they are ill-suited to nimble intellectual ventures. What kinds of institutions support good science?
Bell Labs was great in its time, but its basic research program was ignominiously shut down in 2008, and no other company has stepped up. The Manhattan Project was a major success, but at any time other than a national emergency it would have ended the careers of everybody involved due to waste and duplication of effort (four sites, three methods of separating fissile material, and two bomb designs). The government’s networks of in-house laboratories run by the Department of Energy, Department of Defense, NASA, and the National Institutes of Health don’t have the same kind of prestige or success that Bell Labs once held. This might be because they’re just as beholden to the yearly Congressional budget cycle as corporate labs are to quarterly reports, compounded by the impossibility of anybody there becoming rich or famous, or it might be because they’re typically funded at a compromise level that stifles success and encourages conservatism rather than economy (what’s the tally on abandoned NASA rockets since the Space Shuttle?). The logic of maximizing short-term political benefit (aka Congressional pork) while holding down long-term costs has gotten us fiascos like the Joint Strike Fighter, a space agency that cares more about holding onto decaying facilities than doing science, and a glut of NIH lab space. Fiddling with these big institutions at the margins is just that, fiddling.
I think there’s something to these operational definitions, so let’s try an operational question: “How can we encourage worthwhile science while minimizing the long tail of boring crap?” The New York Times article that led off this piece talked about linking ivory-tower theories to the factory floor, and giving smart people time and freedom. I’ve talked about articles, patents, salaries, and other incentives. A great article in the New Yorker by Jonah Lehrer says that architecture itself can inhibit or produce creative thinking. But all of this is missing something key. To paraphrase Clausewitz, “Science is done by human beings.” Human beings grow up, grow old, and die; scientific institutions are designed to live forever. What if immortal scientific institutions are failing science as a human endeavor?
Bell Labs managed to draw in the best minds of an entire generation, and then slowly faded away. The engineers who built the Apollo project couldn’t find a worthy successor for their energies. From Steve Jobs to the Lockheed Skunk Works or the classic The Soul of a New Machine, we see charismatic leaders taking teams of dedicated young engineers to the breaking point and beyond in pursuit of real innovation, and those teams falling apart afterwards. When I was applying to grad school, a mentor told me “Don’t go to [University X]. They did some great work in the early 90s, but they haven’t moved since.” Scientific institutions, as real entities staffed by human beings rather than abstract generators of knowledge, have a life-cycle.
The age at which scientists win Nobel Prizes and receive their first grants has been slowly rising, and while the exact causes and effects are uncertain, I think that might be one indicator that the institution of science is slowing down. In a scientific version of the Peter Principle, we take the best scientists and promote them into administration, where they spend their time writing grants and herding post-docs rather than doing science. We make young scientists jump through an ever more complex series of hoops to get access to the good equipment and the big questions. The structure of science has become pyramidal, and old men guard the top. It’s no wonder that so much research is trivial, conservative, and aimed at the next rung on the career ladder rather than at shaking the foundations of knowledge.
So this is my humble proposal for fixing science. Stop trying to turn undergrads into grad students into professors into emeriti. Stop running the whole endeavor like some sort of backwards business, with metrics for impact within a department and no reward for doing anything outside your little field. Stop making the reproduction of the social structure of science the highest goal of science.
What if we just gave large groups of young people some basic training, equivalent to passing comps in a PhD program, and then let them loose in the lab? I’m not talking about small scale here. Why not throw open the doors of the Goddard Space Flight Center and Lawrence Berkeley National Laboratory to the brightest and most ambitious hackerspace DIYers and say “All this is yours. Show me something cool.” Let them govern themselves through some kind of Parecon system, with only a minimal level of government oversight. If an experiment fails, well, science is uncertain. If they haven’t done anything worthwhile in five years, well, maybe their funding should be cut.
One of the basic principles here (and this might be naïve) is that people can actually work together in good faith towards common goals. I remember from my time at Caltech, where collaborative work was a core principle, that people naturally formed study groups with others they could work well with. Make the core group of each lab similar in age and experience, to deliberately minimize the effects of bad expert knowledge and of hierarchies based on authority rather than expertise (Clarke’s First Law: “When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”) If somebody isn’t cut out for science, they’ll be gently eased out. Real peer review, rather than the kabuki theater currently practiced by the journals.
What I want to make explicit is that each of these labs is by design a temporary entity. They’ll attract a flourishing community at their founding, and then slowly be pared down to a basic core. While they might be centers of scientific learning, I wouldn’t let young scientists spend more than a few years at a lab, and labs would be barred from recruiting. Each generation must make its own scientific center. And when any given lab is haunted by just a few old-timers, throw open the doors to a new generation of scientists to hack ancient experimental equipment and learn from the Freeman Dyson types hanging around.
This is just a utopian sketch, not a practical plan, and there are lots of open questions. Without strong ties to commercial or political end-users, might science just drift off into solipsistic irrelevance? Would breaking up labs by generation inspire true interdisciplinary research, or merely deprive junior scientists of expert mentoring? How would the funding and governing mechanism really work, and how would we prevent corruption and pathological accumulations of power? I don’t have good answers to these questions, but I think that there might be something to linking the dynamics of scientific (and economic and political) institutions to human cycles rather than some arbitrary standard of knowledge. And could it really be worse—more expensive, less innovative, and less personally fulfilling—than the current system?
((And I wouldn’t drag you, my loyal readers, through 3500 words on science policy without some kind of payoff in the form of a speculative proposal))
*I fully expect you guys to tear this definition to shreds.
**And yes, I’m blurring the lines between science and technology here. You know what I mean, deal with it.