
20130516

Google's Mad Scientist Island

Google's I/O 2013 conference was yesterday, and the tech journalistic consensus is trickling in. In terms of specific product launches, there's nothing with the "wow" factor of last year's Google Glass, but according to Lance Ulanoff, "Google’s worldview is finally coming into focus. The tenuous threads that connect these dozens of different applications and services are strengthening and gradually being pulled closer together. Underneath it all is Google’s vast web of information and smarts, which is all about us." Google's products are getting sleeker, more graceful, less skeuomorphic. I might even go so far as to say 'intuitive and emotional'. (A skeptic might say 'intrusive and creepy'.)

The highlights came in Larry Page's post-keynote remarks.

First, "We should be building great things that don't exist." This is a pretty cool sentiment, particularly from a company like Google, which combines massive size with a desire to innovate. I'm reminded a little of classic-era Bell Labs and Xerox PARC, where a steady cashflow from telecoms and office machines supported radical ideas in electronics and personal computing. Google's search and advertising business gets us wearable computers and self-driving cars.

Second, Larry Page wants earth to have a mad scientist island. This is the !!! moment of the conference, and honestly, I'm not sure what to think, which is why I want to run this by you guys.

Personally, I agree with Page that research is slowed by laws and regulations, but the effect is probably not as big as he thinks. What really slows research down is our species' innate conservatism. On the business side, this is exemplified by demands from Accounting and Marketing that the new product be profitable and interoperable with older versions of the system back to 19xx. On the academic side, it's the publish-or-perish paradigm, which has researchers focused on "do-able" projects as opposed to "needs-to-be-done" projects.

It'd be nice to start with a clean slate, without the pressure to make everything work with existing systems, conform to building codes, or make money or sense this year. But I think that such a place, if it existed, would need oversight. The planet would be rightfully concerned if Mad Scientist Island started dumping toxins into the environment, or systematically violating human rights. Independent research enclaves could be a great idea, provided they could be inspected without destroying their unique culture.


20120605

Guerrilla Science

Observing various scientific “controversies” over the past few years, I’ve seen a pattern repeated again and again between the scientific mainstream and dissenters. Whether it’s about global warming, a link between vaccines and autism, the safety of GMO crops, or any other issue, the conversation looks pretty much like this.

Scientists: “We have developed the following hypothesis due to a preponderance of evidence, and the scientific consensus says we should enact the following policies.”
Dissenters: “Well, that’s just your theory. What about this evidence which argues something entirely different?”
Scientists:  “That evidence is methodologically and theoretically flawed, and we have dismissed it for scientific reasons.”
Dissenters: “No, you’re dismissing it because you’ve been bought off by Big Pharma/Monsanto/Al Gore!”
Scientists: “Well, you’re anti-science, and you aren’t responsible enough to participate in this debate. Come back when you’re willing to accept the truth.”

And then the two camps go their separate ways: the dissenters to fringe websites, where they catalog the corruption of mainstream science and develop their alternative bodies of evidence, and the scientists to the letters pages of scientific journals, where they write pleas for better science education and communication, so that science can drive the frauds out of the public sphere and we can have rational policies again, like we did in the good old days.

The other thing that I’ve observed while following these controversies is that mainstream science is losing. The American political system has become polarized around the truth or falsity of anthropogenic global warming, despite an overwhelming scientific consensus that it is happening and that it is a problem. Fewer people completely vaccinate their kids each year, even though the original Wakefield study has been totally discredited and disease rates are on the rise. And fears of GMOs have become a permanent part of European politics, and a rising force in America and China.

This leads to one of two conclusions: either the communication and education efforts that respected scientists keep calling for have been falling short and should be redoubled, or the conventional framing of the problem is essentially wrong and misleading. I believe it is the latter: the arena of public scientific debate has changed in recent decades, the dissenters are “guerrilla scientists” who, like guerrilla fighters, use asymmetric strategies to avoid the superior strength of their foe, and to win, mainstream science must find an equally adaptable counter-strategy.

To explain this idea, I’m going to need to talk about science and guerrilla warfare. Please bear with me.

As a PhD student in science and technology studies, one of the biggest questions that we face is “What is science?” There are lots of good definitions: facts about the natural world, systematic knowledge, a method for generating said facts and knowledge and ensuring their reliability through experiment and observation. But all of these definitions conceal the process of how science is made; how specific claims become true facts or false hypotheses. To understand that process, we need to go inside science, inside scientific writing, and inside the lab. Bruno Latour has developed some of the most powerful lenses on the actual practice of science in his books Laboratory Life and Science in Action.

For Latour, science is a rhetoric; a way of convincing other people to believe your claims. The form of the modern scientific paper has been carefully developed to be as convincing as possible. A successful scientific paper integrates itself into a network of previous scholarship, explains how it will extend the results of previous scholars, presents a method that can be duplicated by others, shows results (typically in graphical form), and then discusses those results. I’m going to use as my example a paper by the other author of this blog, “A model for the origin and properties of flicker-induced geometric phosphenes”, but any other paper would work just as well.

The paper begins by summarizing the previous work in the field, starting in 1819 and moving through the 1970s and into the present day. The introduction establishes the paper vis-à-vis previous work on the visual system, and poses a question about whether flicker phosphenes originate in the retina or in the visual cortex. The methods section describes using the Wilson-Cowan equation to model flicker in a simulated neural network and how to implement that equation in a computer program; the results present the images produced by the model; and the discussion considers how those images might relate to what happens in the brain, and to what we perceive when we close our eyes and press on our eyeballs (or use ze goggles).

At every turn, the paper preemptively parries those who would try to doubt it. “You think that this problem is unimportant. Here are people who have worked on it before me.” “You doubt my math? Write your own program and check my results.” “You disagree with my choice of the Wilson-Cowan equation? Here are 1040 papers that also use it. Do you disagree with all of them?” The paper is structured and linked such that to disagree with it either requires opposing a much larger and more authoritative body of scholarship than “things that Mike Rule, Matt Stoffregen, and Bard Ermentrout say”, or going into their lab, checking that all their machines work, that the graphs in the paper are actually reproducible, and essentially duplicating their effort and expertise.

This post-modernist view of science can be disconcerting at first; what about objective physical reality? What about the search for truth? Has science just become another kind of blind faith, based on appeal to past authority? No. What Latour tells us is that scientists do not know a priori what is ‘real’ and ‘true’. Those words are only applied to a hypothesis after an intense process of purification and examination by the community of scientists that rules out every other possible explanation.

The picture of science that Latour develops is an interlocking network of claims about the natural world, linked into a mutually reinforcing pattern with more accepted claims at the center and weaker claims at the fringes. Science as a totality is like a fractal star fort, defensible from any angle.  But this picture is partial and passive.

The life of science is in active disputes, for example, “What is the structure of DNA circa 1953?” Disputes are opposing versions of reality, and they are only settled by the destruction and absorption of alternative facts and theories into the final ‘scientific truth’. Actor-Network Theory describes this process as one of enrollment, wherein scientists enlist facts, instruments, and people as allies in their cause, with the aim of building the strongest rhetorical network.

Looking at this, it struck me that the contest is much like a battle, with the scientist deploying his or her enrolled facts like a general committing his soldiers. There are many points of congruence between these models: critical questions and strategic points, the ability to generate new results and supply, predictive power and firepower. But the key point is that in science, as in war, there are rules.

Science has its own kind of Geneva Convention. Not an explicit treaty, but social norms that describe how science should be done, and how the contest should be decided. I’m not going to provide a complete set of norms, but some of the more important ones might be: results must be reproducible; judge the idea and not the person; cite your sources; do not present others’ work as your own; remain open-minded; accept the accolades of your peers humbly. A scientist who held onto a clearly discredited theory would not be respected, and it is considered bad form to pillage a rival’s lab and enslave their postdocs.

In action, the difference between mainstream scientists and dissenters is that dissenters don’t play by the rules. Dissenters care more about their personal commitments than about the structure of science as a whole. They do not cede the field gracefully if their facts are overturned. And they accept a wider variety of evidence as a basis for their claims, including social, moral, economic, and political factors. They may not be good scientists, and their theories are frequently shoddy and mystical, but it is important to recognize that they are engaged in essentially the same kind of work as mainstream scientists: making cause-and-effect claims about the natural world. Calling dissenters ‘anti-science’ implies that scientists should ignore and belittle them as unworthy of serious critique. Calling them guerrillas suggests a very different approach for understanding their aims and engaging with them.

Guerrilla warfare is political warfare. In conventional war, the goal is defeat of the enemy through decisive battle, and strategy is the art of staging the decisive battle on favorable terms. The aim of guerrilla warfare is to demonstrate the political illegitimacy of the people in charge while building popular support for revolution. Strategy is focused on swift strikes to demonstrate the ineffectiveness of the governance and provoke reprisals against the people, the preservation of the guerrilla’s own forces, and the use of time to wear out the enemy’s will to resist. While conventional military firepower is still important in guerrilla war, it is of secondary importance compared to psychological and political factors. In guerrilla warfare, the winner is the side that everybody believes has won; not the side that maintains control of the battlefield afterwards.

Guerrilla warfare is a complicated subject, and no two conflicts are alike, but some common patterns can be drawn. Lt. Col. John Boyd developed a theory of warfare based on learning systems, and he noticed that as a force slides towards defeat, it becomes isolated and insular, it stops taking in information from the outside world, and it is eventually consigned to irrelevance. Information and morality are central; as the American military learned in Vietnam in the 1960s and in Iraq in 2003 and 2004, firepower is useless if targets cannot be located, and support can only be gained through demonstrating moral strength and sensitivity. To beat guerrillas, the government must demonstrate its superiority through active policies that improve the lot of the people while avoiding internal corruption. Successful counter-insurgency strategies, such as the Iraq Surge implemented by General Petraeus, aim to isolate guerrillas, to draw wavering fighters back into the government’s camp, and to find and kill the most hardcore commanders who cannot be converted.

Combining these two theories, Latour’s Actor-Network Theory and Boyd’s OODA Loop, the shape of the problem and its solution begin to emerge. Scientific guerrillas exist because scientific expertise is a key buttress of democratic decision-making. 21st century American culture is such that a policy must appeal both to the will of the people and an external reality as informed by expert, i.e. scientific, opinion. But science, to put it bluntly, is hard. It requires a long and grueling apprenticeship and then access to expensive and specialized laboratory equipment. And worse from a political perspective, science is not democratic. No matter how many geeks wished that the OPERA neutrinos were truly faster-than-light, that result stubbornly remains an experimental error. It’s far easier to don the guise of expertise when it’s needed to support a policy position than it is to genuinely discover the truth according to the strict rules of science.

In this context, saying that the dissenters need to play by mainstream standards of evidence is like saying that we just need Al Qaeda to put on uniforms, gather around Tora Bora, and have that decisive battle we’ve been waiting for. It’s a fantasy, because it involves convincing guerrillas who are winning to fight a conventional battle that they will surely lose. Science education, science funding, and more public understanding of science are equivalent to sending in more troops, more weapons, more airstrikes. It can stabilize the situation, but it is unlikely to actually defeat the guerrillas.

I worry that science is becoming isolated in a Boydian sense. Scientific papers only cite other scientific papers; most scientists work and live in enclaves around major research universities. There are extremely good reasons for this (from a conventional perspective, it generates stronger science), but it has also made science more brittle, less relevant, and less politically legitimate.

Like it or not, scientists have become embroiled in a wide variety of guerrilla disputes on major issues, and I’ve not seen a robust strategy for countering the guerrillas. I love and respect science; it’s the best tool for understanding and improving the world that we have, but it is under attack in ways that most people can’t even see, and it is not effectively defending itself. Guerrillas can be beaten, but it will require an active strategy of integrity, candor, and two-way communication. The stakes could not be higher. As Henry Kissinger said of the Vietnam War in 1969, “The guerrilla wins if he does not lose; the conventional army loses if it does not win.”



20120228

Beyond Bell Labs

One of the ideas that I’m perennially kicking around is social support for science, or more precisely, “What kinds of science?” and “Why should the government support it?” When these questions are asked, the answer usually centers around some type of Basic (or Pure, or Fundamental) Research: Research without obvious applications, research that underlies other, more useful forms of science, research that should be funded by the government because, as a non-rival and non-excludable public good, it will be underfunded by the private sector. As conventional wisdom has it, basic research is a core input for economic innovation, and economic innovation is good for everybody. But really, when you look beyond the platitudes, what are we trying to do with science?

A recent New York Times profile on Bell Labs has brought my thoughts on the matter into sharp relief. You should really just read the whole piece, but if you’re not familiar with Bell Labs, they invented much of the 20th century, including the semiconductor, lasers, fiber optics, communications satellites, digital cameras, UNIX, and the C programming language. Why was Bell Labs so successful?

Quite intentionally, Bell Labs housed thinkers and doers under one roof. Purposefully mixed together on the transistor project were physicists, metallurgists and electrical engineers; side by side were specialists in theory, experimentation and manufacturing. Like an able concert hall conductor, he sought a harmony, and sometimes a tension, between scientific disciplines; between researchers and developers; and between soloists and groups… Bell Labs was sometimes caricatured as an ivory tower. But it is more aptly described as an ivory tower with a factory downstairs. It was clear to the researchers and engineers there that the ultimate aim of their organization was to transform new knowledge into new things.

[Mervin Kelly, Director of Bell Labs] gave his researchers not only freedom but also time. Lots of time — years to pursue what they felt was essential… In sum, he trusted people to create. And he trusted them to help one another create. To him, having at Bell Labs a number of scientific exemplars — “the guy who wrote the book,” as these standouts were often called, because they had in fact written the definitive book on a subject — was necessary. But so was putting them into the everyday mix. In an era before cubicles, all employees at Bell Labs were instructed to work with their doors open.

In essence, Bell Labs took the best in the world and aimed them towards “use-inspired basic research”, what science policy scholar, academic administrator, and NSF advisor Donald Stokes identified as Pasteur’s Quadrant. This kind of research aims at both a deeper understanding of the universe and immediate application to the social good, with Pasteur’s work on the bacterial origins of disease being the prototypical example. The standard narrative is that this type of ground-breaking, profitable, and socially useful research has ceased to occur. Stokes argues that Pasteur’s Quadrant has no public advocate. The American scientific system as it exists in universities does “basic research”, using the policy justifications laid down in the cornerstone document of American science policy, Vannevar Bush’s Science: The Endless Frontier. Mission agencies, such as the Department of Defense, fund “applied science” that addresses pressing issues, such as creating a plane invisible to radar, without concern for advancing theory. And since corporations have cut strategic research and development centers like Bell Labs or Xerox PARC in pursuit of short-term profits, nobody is doing what is actually the most significant type of research.

Another explanation is that politics poisoned the Republic of Science. Instead of pursuing truth, scientists were forced to chase Federal grants that directed research towards conventional, less risky, and less appealing science. As PayPal founder Peter Thiel elucidates in a recent interview with Francis Fukuyama:

Peter Thiel: My libertarian views are qualified because I do think things worked better in the 1950s and 60s, but it’s an interesting question as to what went wrong with DARPA. It’s not like it has been defunded, so why has DARPA been doing so much less for the economy than it did forty or fifty years ago? Parts of it have become politicized. You can’t just write checks to the thirty smartest scientists in the United States. Instead there are bureaucratic processes, and I think the politicization of science—where a lot of scientists have to write grant applications, be subject to peer review, and have to get all these people to buy in—all this has been toxic, because the skills that make a great scientist and the skills that make a great politician are radically different. There are very few people who are both great scientists and great politicians. So a conservative account of what happened with science in the 20th century is that we had a decentralized, non-governmental approach all the way through the 1930s and early 1940s. At that point, the government could accelerate and push things tremendously, but only at the price of politicizing it over a series of decades. Today we have a hundred times more scientists than we did in 1920, but their productivity per capita is less than it used to be.

Francis Fukuyama: You certainly can’t explain the survival of the shuttle program except in political terms.

Peter Thiel: It was an extraordinary program. It cost more and did less and was probably less safe than the original Apollo program. In 2011, when it finally ended, there was a sense of the space age being over. Not quite, but it’s very far off from what we had decades ago. You could argue that we had more or better-targeted funding in the 1950s and 1960s, but the other place where the regulatory situation is radically different is that technology is much more heavily regulated than it used to be. It’s much harder to get a new drug through the FDA process. It takes a billion dollars. I don’t even know if you could get the polio vaccine approved today.

The scholar in me must add that Peter Thiel’s understanding of American science policy is very ahistorical, if not flat-out wrong. The current science policy and science funding apparatus that Thiel rails against is inherited from the Cold War, and that system was in turn developed from the research system set up during World War II. During this time, the Office of Scientific Research and Development was able to direct a much smaller scientific community in developing radar, computers, and the atomic bomb because its director, Vannevar Bush, personally knew every scientist of importance in the nation. And even then, the system directed the lion’s share of grants towards a handful of top universities, including Johns Hopkins, MIT, and Caltech. Vannevar Bush, for all his talents as a scientist and administrator, thought that the digital computer and rocketry were just fads, and would never amount to anything. If Vannevar Bush had actually been given sole, long-term control of American science policy, he would have delayed many fruitful fields of research, and likely have been the subject of high-profile hearings on cronyism and corruption in science, not from malfeasance per se, but just from the nature of his management style (you can see an echo of this in the allegations around DARPA director Regina E. Dugan and RedXDefense, LLC). The NSF and NIH are not perfect organizations by any means, but they have managed to avoid such massive and obvious failure over the past 50 years. Pretty good for agencies that haven't had a clear national goal since the collapse of the Soviet Union.

To return to the questions posed at the start of this essay, what is it about basic research that is important for innovation? I’d like to offer an operational definition of research: Research is what scientists do. And what is it that scientists do? At the highest level, ignoring the details of any particular field of research: They observe things; they measure things; they change conditions and see how the measurements change; they repeat the changes and the measurements; they develop some sort of theory about what’s going on; and then they write up their results.* Sometimes the results get written up as a journal article, in which case it’s basic research. Other times, they get written up as a patent application, in which case it’s applied research. If nobody writes about it, then nobody learns about it, and it dies. Publishing is at the heart of science. The Royal Society started as a club to share the results of 17th-century natural philosophers, and was widely emulated across the continent, which is why some scientific journals are still called The Letters of Such-and-Such Organization.

What I want to draw out here is that neither articles nor patents fit neatly into Stokes’ concept of Pasteur’s Quadrant. Attempts to bridge these forms of publishing, like university technology transfer offices and the Bayh-Dole Act, are crude hacks to get both patents and articles out of the same body of work. While the form and content of a scientific article or patent is basically arbitrary, in that there’s no reason why they have to look the way that they do as opposed to some other form, there is something to the idea of a separation between Ideas and Things, and the different standards of scientific success in each realm. But is the minimization of Pasteur’s Quadrant and innovation merely an artifact of the publishing process? Again, I think not.

What is it that distinguishes “real science” from the kind of thing that’s done in a high-school classroom? What is it that distinguishes a scientist from a non-scientist? The questions are related: In a high-school experiment the answer is in the back of the book, while in a real experiment the answer is not yet known. And a scientist is somebody who has made a contribution to the collective body of knowledge by solving an unknown problem. Or to use an operational definition, a scientist is somebody who has earned their PhD by completing a dissertation and convincing a committee of current scientists of its validity and novelty.

Essentially every professional scientist has a PhD (counter-examples welcome), and many scientists spend much of their time helping younger scientists earn their dissertations. Working backwards from our operational definition of science as what scientists do, and adding in the idea that all scientists have to earn a dissertation, I’d like to propose that basic research is any scientific problem posed such that a reasonably bright individual might be expected to solve it in the course of earning a PhD.

Where this gets tricky is that not all scientific problems are created equal. Some have clear and immediate applications (how do we cure this disease?), others are easy (what do cows eat?), some are opaque (what is ‘time’ made of?), and some are hard (how do we make net-energy-positive fusion?).** Most problems lie somewhere in between, but after several hundred years of directed scientific endeavor, I think that I can safely say that a lot of the low-hanging fruit, the easy problems with obvious applications, has been picked. What is left is either very hard or irrelevant to useful ends. Because basic research is operationally defined as solvable, it must therefore be irrelevant.

Basic research serves a clear purpose. We need a class of problems to separate people capable of doing science from those who cannot, and to separate good scientists from bad scientists (unless you trust Vannevar Bush and/or Peter Thiel to just write checks to the smartest scientists they know). There are creativity and problem-solving skills, gained in the process of formulating a novel hypothesis and proving original conclusions, that cannot be obtained by replicating known results. And demanding that every PhD candidate be an Einstein or a Watson or a Crick is unfair to the vast majority of very capable scientists who will never win the Nobel Prize.

Basic research is necessary for renewing and sustaining a vibrant scientific community, but I think that scientists by and large are not taking the training wheels off their research. There are plenty of reasons to spend a career doing basic research: hiring decisions are based on publications, grants frequently demand results in a year or two, and the psychological rewards of completing a project or becoming the world expert in some sub-sub-sub-field all bias scientists towards ‘do-able’ basic research rather than high-impact problems that may take years and yield no result. But what was once a program to create new scientists has become the raison d’être of science, to the detriment of both innovation and the public support of science.

These incentives are both perverse and pervasive. My colleague John Carter McKnight wrote in an astute post on research and impact that:

“The system – precisely like the Soviet economy (look, I’m not going Gresham’s law here – I actually have a master’s degree in Soviet economic systems. Don’t ask.) doesn’t require quality in output past a bare minimum of peer review (which like Soviet production standards is gamed – since we all need to produce volume, we’re incentivized to accept crap output from others in return for their accepting our crap output) but rather quantity. Basic human nature points to a race to the bottom, or producing to the minimum acceptable standard.”

While John was writing about the humanities, the same argument applies to the sciences, where 40% of papers are not even cited once. Even scientists find others’ basic research boring and irrelevant.

During the Enlightenment, natural philosophy was reserved for wealthy gentlemen and those experimentalists who could secure a patron. These days, Big Science projects like the Large Hadron Collider, the Human Genome Project, or research into alternative energy are beyond the abilities of any single individual—breakthroughs require collaborations of large groups of people over years if not decades. Yet at the same time, big projects require consensus and generate their own momentum; they are ill-suited towards nimble, intellectual ventures. What kinds of institutions support good science?

Bell Labs was great in its time, but was ignominiously shut down in 2008, and no other company has stepped up. The Manhattan Project was a major success, but at any time other than a national emergency it would have ended the careers of everybody involved due to waste and duplication of effort (four sites, three methods of separating fissile material, and two bomb designs). The government’s networks of in-house laboratories run by the Department of Energy, Department of Defense, NASA, and the National Institutes of Health don’t have the same kind of prestige or success that Bell Labs once held. This might be because they’re just as beholden to the yearly Congressional budget cycle as corporate labs are to quarterly reports, combined with the impossibility of becoming rich or famous, or it might be because they’re typically funded at a compromise level that stifles success and encourages conservatism rather than economy (what’s the tally on abandoned NASA rockets since the Space Shuttle?). The logic of maximizing short-term political benefit (aka Congressional pork) while holding down long-term costs has gotten us fiascos like the Joint Strike Fighter, a space agency that cares more about holding onto decaying facilities than doing science, and a glut of NIH lab space. Fiddling with these big institutions at the margins is just that, fiddling.

I think there’s something to these operational definitions, so let’s try an operational question: “How can we encourage worthwhile science while minimizing the long tail of boring crap?” The New York Times article that led this piece talked about linking ivory-tower theories to the factory floor, and giving smart people time and freedom. I’ve talked about articles, patents, salaries, and other incentives. A great article in the New Yorker by Jonah Lehrer says that architecture itself can inhibit or produce creative thinking. But all of this is missing something key. To paraphrase Clausewitz, “Science is done by human beings.” Human beings grow up, grow old, and die; scientific institutions are designed to live forever. What if immortal scientific institutions are failing science as a human endeavor?

Bell Labs managed to draw in the best minds of an entire generation, and then slowly faded away. The engineers that built the Apollo project couldn’t find a worthy successor for their energies. From Steve Jobs to the Lockheed Skunk Works or the classic The Soul of a New Machine, we see charismatic leaders taking teams of dedicated young engineers to the breaking point and beyond in pursuit of real innovation, and those teams falling apart afterwards. When I was applying to grad school, a mentor told me “Don’t go to [University X]. They did some great work in the early 90s, but they haven’t moved since.” Scientific institutions, as real entities staffed by human beings rather than abstract generators of knowledge, have a life-cycle.

The age of Nobel Prize winners and first-grant awards has been slowly rising, and while the exact causes and effects are uncertain, I think that might be one indicator that the institution of science is slowing down. In a scientific version of the Peter Principle, we take the best scientists and promote them into administration where they spend their time writing grants and herding post-docs rather than doing science. We make young scientists jump through an ever more complex series of hoops to get access to the good equipment and the big questions. The structure of science has become pyramidal, and old men guard the top. It’s no wonder that so much research is trivial, conservative, and aimed at the next rung in the career ladder rather than shaking the foundations of knowledge.

So this is my humble proposal for fixing science. Stop trying to turn undergrads into grad students into professors into emeriti. Stop running the whole endeavor like some sort of backwards business, with metrics for impact within a department and no reward for doing anything outside your little field. Stop making the reproduction of the social structure of science the highest goal of science.

What if we just gave large groups of young people some basic training, equivalent to passing comps in a PhD program, and then let them loose in the lab? I’m not talking about small scale here. Why not throw open the doors to the Goddard Space Flight Center and Lawrence Berkeley National Laboratory to the brightest and most ambitious hackerspace DIYers and say “All this is yours. Show me something cool.” Let them govern themselves through some kind of Parecon system, with only a minimal level of government oversight. If an experiment fails, well, science is uncertain. If they haven’t done anything worthwhile in 5 years, well, maybe their funding should be cut.

One of the basic principles here (and this might be naïve) is that people can actually work together in good faith towards common goals. I remember from my time at Caltech, where collaborative work was a core principle, that people naturally formed study groups with others they could work well with. Make the core group of each lab similar in age and experience to deliberately minimize the effects of bad expert knowledge and hierarchies based on authority rather than expertise (Clarke’s First Law: When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.) If somebody isn’t cut out for science, they’ll be gently eased out. Real peer review, rather than the kabuki theater currently practiced by the journals.

What I want to make explicit is that each of these labs is by design a temporary entity. They’ll attract a flourishing community at their founding, and then slowly be pared down to a basic core. While they might be centers of scientific learning, I wouldn’t let young scientists spend more than a few years at a lab, and labs would be barred from recruiting. Each generation must make its own scientific center. And when any given lab is haunted by just a few old-timers, throw open the doors to a new generation of scientists to hack ancient experimental equipment and learn from the Freeman Dyson types hanging around.

This is just a utopian sketch, not a practical plan, and there are lots of open questions. Without strong ties to commercial or political end-users, might science just drift off into solipsistic irrelevance? Would breaking up labs by generation inspire true interdisciplinary research, or merely deprive junior scientists of expert mentoring? How would the funding and governing mechanism really work, and how would we prevent corruption and pathological accumulations of power? I don’t have good answers to these questions, but I think that there might be something to linking the dynamics of scientific (and economic and political) institutions to human cycles rather than some arbitrary standard of knowledge. And could it really be worse—more expensive, less innovative, and less personally fulfilling—than the current system?

((And I wouldn’t drag you, my loyal readers, through 3500 words on science policy without some kind of payoff in the form of a speculative proposal))

*I fully expect you guys to tear this definition to shreds.

**And yes, I’m blurring the lines between science and technology here. You know what I mean, deal with it.


20111121

The Vaccine Controversy

This past Friday I had the chance to meet Mark Largent, a historian of science at Michigan State University, who, after writing an excellent history of American eugenics, is working on a history of the anti-vaccination movement. The anti-vaccination movement is one of the more contentious flashpoints in popular culture, with views on vaccines ranging from the deliberate poisoning of children by doctors, to anti-science nonsense that threatens to reverse a century of healthcare gains. Largent’s methodology is to look at the people involved and try to see the world as they believe it to be, without doing violence to their views. The question of whether vaccines cause autism is scientifically and socially irrelevant. But it is a proxy for a wider and more important spectrum of beliefs about personal responsibility and biomedical interventions, the interface between personal liberty and public goods, and the political consequences of these beliefs.

Some numbers: Currently, 40% of American parents have delayed one or more recommended vaccines, and 11.5% have refused a state-mandated vaccine. 23 states, containing more than half the population, allow “philosophical exemptions” to mandatory vaccination, which are trivial to obtain. The number of inoculations given to children has increased from 9 in the mid-1980s to 26 today. As a single father, Largent understands the anti-vaccine movement on a basic level: babies hate shots, and doctors administer dozens of them from seconds after birth to two years old.

The details of “vaccines-cause-autism” are too complex to go into here, but Largent is an expert on Andrew Wakefield, the now-discredited British physician who authored the withdrawn Lancet study which suggested a link between the MMR vaccine and autism, and Jenny McCarthy, who campaigned against the mercury-containing preservative thimerosal in the US. Now, as for the scientific issue, it is settled: vaccines do not cause autism. Denmark, which keeps comprehensive health records, shows no difference in autism cases between the vaccinated, partially vaccinated, and un-vaccinated. We don’t know what causes autism, or why cases of autism are increasing, but it probably is related to more rigorous screening and older mothers, as opposed to any external cause. Certainly, the epidemiological cause-and-effect link between vaccines and autism is about as strong as the link between cellphone radiation and cancer, namely non-existent.

But parents, looking for absolute safety and certainty for their children, aren’t convinced by scientific studies, simply because it is effectively impossible to prove a negative to their standards. A variety of pro-vaccine advocates, Seth Mnookin and Paul Offit among them, have cast this narrative as the standard science-denialism story, with deluded and dangerous parents threatening to return us to the bad old days of polio. This “all-or-nothing” demonization is unhelpful, and serves merely to alienate the parents doctors are trying to reach. Rather, Largent proposed that we need to have a wider social debate on the number and purpose of vaccines, and on the relationship between doctors, parents, and the teachers and daycare workers who are the first line of vaccine compliance.

Now, thinking about this in the context of my studies, this looks like a classic issue of biopolitics and competing epistemologies, and is tied directly into the consumerization of the American healthcare system. According to Foucault, modernity was marked by the rise of biopolitics. “One might say that the ancient right to take life or let live was replaced by a power to foster life or disallow it to the point of death.” While the sovereign state—literally a man in a shiny hat with a sword—killed his enemies to maintain order, the modern state tends to the population like a garden, keeping careful statistics and intervening to maintain population health.

From a bureaucratic rationalist point of view, vaccines are an ideal tool, requiring a minimal intervention, and with massive and observable effects on the rolls of births and deaths and on the frequency and severity of epidemics. Parents don’t see these facts, particularly when vaccines have been successful. What they do see is that babies hate vaccines. I’m not being flip when I say that the suffering of children is of no account to the bureaucratic perspective; the official CDC claim is that 1/3 of babies are “fretful” after receiving vaccines. This epistemology justifies an unlimited expansion of the vaccination program, since any conceivable amount of fretfulness is offset by even a single prevented death. For parents and pediatricians, who must deal with the expense, inconvenience, and suffering of each shot, the facts appear very different. These mutually incompatible epistemologies mean that pro- and anti-vaccine advocates are talking past each other.

The second side of the story is how responsibility for maintaining health has been increasingly shifted onto patients. From the women’s health movement of the 1970s, with Our Bodies, Ourselves, to the 1997 Consumer Bill of Rights and Responsibilities, to Medicare Advantage plans, ordinary people are increasingly expected to take part in healthcare decisions that were previously the sole province of doctors. The anti-vaccine movement has members from the Granola Left and the Libertarian Right, but it is overwhelmingly composed of upper-middle-class women, precisely the people who have seen the greatest increase in medical knowledge and choice over the past few decades. Representatives of the healthcare system should not be surprised that after empowering patients to make their own decisions, patients sometimes make decisions against medical advice.

So how to resolve this dilemma? The pro-vaccine advocates suggest we either force people to get vaccinated, a major intrusion of coercive power into a much more liberalized medical system, or that we somehow change the epistemology of parents. Both of these approaches are unworkable. Likewise, anti-vaccine advocates should lay off vaccines-cause-autism. They may have valid complaints, but at this point the science is in, and continuing to push that line really pisses scientists off. Advocates need to understand the standards of scientific knowledge, and what playing in a scientific arena entails.

In the vaccine controversy, as in so many others, what we need is a forum that balances both scientific and non-scientific knowledge, so that anti-vaccine advocates can speak their case without mangling science in the process. I don’t know what that forum would look like, who would attend, or how it would achieve this balance, but the need for better institutional engagement between science and society is clear.


20111024

I, Scientist Evangelist.

So I guess I should start with a confession. I'm not a scientist; I studied English literature at university, and teaching the English language to Korean children is how I feed myself from day to day. Since I lack a white coat, couldn't begin to interpret the mathematics of M-Theory, and have never worn reading glasses in my life, the public at large would perhaps assume I have no understanding of science whatsoever and am therefore not worth listening to. This is exactly the combustible mix of technology and ignorance that Carl Sagan alluded to before his death. CERN-induced black hole paranoia, climate change denial, bizarre nuclear weapons policies, bamboozling allocation of public funds, and of course the ability of creationism's lupine offspring, intelligent design, to hide itself snugly in the most absurd of sheepskins can all be attributed at least partially to the lack of scientific understanding held by the “general public”; to the unquestioning belief that if one does not have the letters Sc somewhere after one's name, then the very idea of empirical thinking had better be left to someone more qualified.

For me the pertinent question is: How can a non-scientist like me go about improving the public understanding of science when I myself admit to being as qualified to lecture on physics as I am to fly a plane? I believe the answer is in coming out of the closet as a non-scientist who is passionate about science. No one expects to see nothing but young, drunk Juilliard undergrads stumbling out of the music venues and nightclubs of the world. You do not need a grade-eight piano certificate to purchase an iPod any more than I need to hold my university transcripts to read a book. What is there, then, to stop a history student or a telemarketing sales consultant from putting a cold beer in his friend's hands at the end of a hard shift and explaining the second law of thermodynamics? Perhaps the not unfounded fear that the friend may raise an eyebrow or even walk away. This is possible, but not beyond remedy. I believe that a public aversion to science, and the belief that if it is 'difficult' it can't be fun, exists merely through social conditioning. I don't think there is a gene that splits the human race down the line of Jersey Shore or Johannes Kepler.


See photographic evidence

I doubt I have to say that the diagram is not to scale, and if we could all overlook the questionable appearance of the comet’s tail, I can say I presented this to a room of Korean elementary school students who are learning English as a second or third language. Some of them had never heard of the concept of an orbit, but by the end of the class I'm quite confident they all had a decent understanding. Not only that, but they were visibly excited by these new ideas. Who wouldn't be? This whiteboard diagram is where we live. Why wouldn't a person be curious about their place in the universe?

A future vision of a society that embraces science and is as literate in scientific ideas as it is in the Beatles' discography is no doubt a long way off. However, I believe we must all of us make some heavy sacrifices (not limited to appearing to be a 'geek' in front of sexually attractive members of the human race) and become evangelists of the joy of science. This isn't an endorsement of so-called 'aggressive' atheism. I neither condemn nor condone what you might call the “Dawkins movement”, but I do believe it has more to say about religion than it does about science. Instead, the next time a friend admires the night sky, remind them that they are staring back through time. A piece of information which of course will be of no practical use in their day-to-day life, but as Bertrand Russell tells us, “There is much pleasure to be gained from useless knowledge.” So go grab someone you love, buy them a drink, and tell them how you know they've been drinking Isaac Newton's pee.


20110929

The origin and properties of flicker-induced geometric phosphenes

A Model for the Origin and Properties of Flicker-Induced Geometric Phosphenes (PDF).

Many people see geometric patterns when looking at flickering lights. The patterns depend on the frequency, color, and intensity of the flickering. Different people report seeing similar shapes, which are called “form constants”. 

Flicker hallucinations are best induced using a Ganzfeld (German for “entire field”): an immersive, full-field, uniform visual stimulation. Frequencies ranging from 8 to 30 Hz are most effective. 

This effect is used by numerous sound-and-light machines sold for entertainment purposes. Some of these devices claim to alter the frequency of brain waves. There is no scientific evidence for this. However, the flickering stimulus may increase the amplitude of oscillations that are already present in the brain, to the point where geometric visual hallucinations can occur.

Figure 1. Illustrations of basic phosphene patterns (form constants) as they appear subjectively (left), and their transformation to planar waves in cortical coordinates (right).

How do flickering lights cause geometric visual hallucinations? Roughly, flickering lights confuse the eye and the brain, causing them to see geometric shapes that aren't there. The phenomenon is related to how bold patterns can create optical illusions, but in this case the pattern varies in time, rather than space.

Our hypothesis is that the flickering interacts with natural ongoing oscillations in the visual cortex, exciting a specific frequency of brain waves. This increases the activity in the visual cortex. This increase in excitability is similar to what occurs on some hallucinogens.

The simpler patterns, like ripples and spots, are mathematically related to the Turing patterns seen in animal coats. More complex patterns occur when these instabilities interact with the brain's pattern-recognition circuits. For more information, including the mathematical details of the model, head over and check out the paper.
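For orientation, here is the general shape of the model (a schematic only; the coupling weights $w$, kernels $K$, sigmoid $f$, and drive $h(t)$ below are generic placeholders, and the paper gives the exact formulation and parameter values). It is a pair of spatially extended Wilson-Cowan equations for excitatory and inhibitory firing rates $E$ and $I$:

$$\tau_e \frac{\partial E}{\partial t} = -E + f\!\left(w_{ee}\, K_e * E - w_{ie}\, K_i * I + h(t)\right)$$

$$\tau_i \frac{\partial I}{\partial t} = -I + f\!\left(w_{ei}\, K_e * E - w_{ii}\, K_i * I\right)$$

Here $*$ denotes spatial convolution against Gaussian kernels (with inhibition spreading farther than excitation), and $h(t)$ is the spatially uniform flicker drive. When the drive pushes the network past a critical excitability, the uniform state loses stability at a preferred spatial wavenumber and stripes, spots, or hexagons emerge; this is the Turing mechanism mentioned above.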

The theory predicts that low frequencies (8-12 Hz) are more likely to induce spot-like patterns, and that high frequencies (12-30 Hz) are more likely to induce striped or ripple patterns. Anecdotally, I have tested this on myself and find it to be approximately correct for a white flicker Ganzfeld stimulus. I also find that low-frequency red-green flicker reliably induces checkerboard patterns, and that red-blue flicker reliably induces an almost quasicrystalline pattern of triangles and hexagons.

Many thanks to Matt Stoffregen and Bard Ermentrout for making this possible, as well as the CNBC undergraduate training program. The paper can be cited as

Rule, M., Stoffregen, M. and Ermentrout, B., 2011. A model for the origin and properties of flicker-induced geometric phosphenes. PLoS Comput Biol, 7(9), p.e1002158.

 


 

Extras that didn't make it into the final paper:

Below is a variant of Figure 6 inspired by Robert Munafo's visualization of the parameter space of the Gray-Scott reaction-diffusion model. It shows how the evoked patterns vary depending on the flicker frequency (horizontal axis) and amplitude (vertical axis). Activity levels of excitatory and inhibitory cells are colored in yellow and blue, respectively.

It's computed by integrating the periodically-driven 2D Wilson-Cowan equations on the GPU. We drive the system with a uniform periodic stimulus, but vary the integration time step $\Delta t$ so that each location perceives a different frequency. The continuous simulation causes patterns to "spill over" into nearby areas (where patterns are not spontaneously stable), so we didn't include this version in the paper.
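For readers who want to play with this at home, here is a minimal CPU sketch in Python/NumPy of the same idea: integrating a periodically driven 2D Wilson-Cowan system on a periodic grid. Every parameter value below is an illustrative placeholder chosen to sit near a pattern-forming instability, not one of the paper's fitted values.

```python
import numpy as np

N = 128          # grid size (illustrative)
dt = 0.5         # time step in ms (illustrative)
steps = 4000     # total integration steps
freq = 18.0      # flicker frequency in Hz (assumed)
amp = 1.0        # flicker drive amplitude (assumed)

def gauss_fft(sigma):
    """FFT of a normalized, periodic 2D Gaussian kernel (width in grid units)."""
    x = np.fft.fftfreq(N) * N                 # signed distances on the periodic grid
    X, Y = np.meshgrid(x, x)
    k = np.exp(-(X**2 + Y**2) / (2.0 * sigma**2))
    return np.fft.fft2(k / k.sum())

def conv(K_hat, A):
    """Periodic spatial convolution computed in the Fourier domain."""
    return np.real(np.fft.ifft2(K_hat * np.fft.fft2(A)))

f = lambda u: 1.0 / (1.0 + np.exp(-u))        # sigmoid firing-rate function

Ke, Ki = gauss_fft(2.0), gauss_fft(4.0)       # inhibition spreads farther than excitation
E = 0.1 + 0.01 * np.random.rand(N, N)         # excitatory rates, plus small noise
I = 0.1 + 0.01 * np.random.rand(N, N)         # inhibitory rates, plus small noise

for n in range(steps):
    t = n * dt / 1000.0                       # time in seconds
    h = amp * np.sin(2 * np.pi * freq * t)    # spatially uniform flicker drive
    # Coupling weights here are illustrative, not the published parameters.
    dE = (-E + f(12.0 * conv(Ke, E) - 10.0 * conv(Ki, I) - 3.0 + h)) / 10.0  # tau_e = 10 ms
    dI = (-I + f(14.0 * conv(Ke, E) -  3.0 * conv(Ki, I) - 4.0)) / 20.0      # tau_i = 20 ms
    E, I = E + dt * dE, I + dt * dI

# E now holds the excitatory activity; image it to look for stripes or spots.
```

The GPU version behind the figure does the same update per pixel, with $\Delta t$ varying across the image so that each column effectively sees a different flicker frequency.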

Primary visual cortex isn't a perfectly square, periodic domain, and we also simulated patches resembling the shape of this brain area. Here, it was important to create a soft absorbing boundary, otherwise the sharp boundary itself promotes pattern formation. Horizontal and vertical stripes are stable, and this may account for why radial and tunnel-like patterns are slightly more common.
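One way to build such a soft boundary (a construction in the same spirit; the exact form used for the paper's simulations may differ) is to ramp a damping mask smoothly from one in the interior to zero at the edge:

```python
import numpy as np

N = 128
yy, xx = np.mgrid[0:N, 0:N]
dist = np.minimum.reduce([xx, yy, N - 1 - xx, N - 1 - yy])  # distance to nearest edge
mask = 1.0 / (1.0 + np.exp(-(dist - 8.0) / 2.0))            # ~1 in the interior, ->0 at the edge

# Inside the integration loop of the sketch above, apply the mask each step
# (E *= mask; I *= mask) so activity decays smoothly near the border rather
# than reflecting off a hard edge and seeding spurious patterns.
```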


Videos of simulation:

Here is a video of the striped patterns emerging on a rectangular domain:

 

 

And the hexagonal patterns:

 

Here is the stripe pattern again, transformed into perceptual coordinates:

 

Emerging patterns are associated with a "critical wavenumber", which sets the spatial scale of the instabilities in the model.  If you visualize the amplitude of the Fourier coefficients of the 2D system as patterns emerge, you see that isolated peaks in spatial frequency appear (along with their harmonics). The example below is for a striped pattern:
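As a quick illustration of what those peaks look like, here is a toy version in Python/NumPy, with a synthetic striped field (spatial frequency 6 cycles per domain, plus noise) standing in for the simulated activity; the real input would be the $E$ field from a simulation like the sketch above.

```python
import numpy as np
import matplotlib.pyplot as plt

N = 128
y, x = np.mgrid[0:N, 0:N]
E = np.sin(2 * np.pi * 6 * x / N) + 0.1 * np.random.randn(N, N)  # toy stripes, k = 6

# Amplitude of the 2D Fourier coefficients, with the zero frequency centered:
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(E - E.mean())))

plt.imshow(np.log1p(spectrum))   # log scale makes the isolated peaks obvious
plt.title("Amplitude spectrum: peaks at the critical wavenumber")
plt.show()
```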





20110713

Two more from Breakthrough

The last two mandatory blogs from my time at Breakthrough are up. Click the links for the full thing.

Technological Mojo
Liberalism as it exists today isn't so much an ideology as a flag of convenience. The progressive position on policies promoting the welfare state and cultural attitudes towards abortion, gun control, and gay marriage unites a solid minority coalition, but one without big ideas except for a vague notion of 'play nice' and 'be yourself.' As Michael Lind of the New America Foundation put it, the Democratic Party is about checking off the wish-lists of its constituent interest groups. "What is the liberal position on the environment? It's what the Sierra Club wants." Rather than discuss values, liberals have retreated to policy literalism, appealing to a slew of "scientific" and "rational" policies to achieve narrow, tactical ends: price carbon dioxide, extend healthcare to the uninsured, stop the war, decrease classroom sizes. Liberals have ceded values and emotion to conservatives, with disastrous electoral and policy results at every level of government. Liberal scientism is a rhetoric of failure.

It's Dangerous Being Modern
The Breakthrough Dialog began with a very interesting idea, that of second modern risk, which was not fully fleshed out. At the heart of second modernity is the idea that humanity has become responsible for its own fate. Thanks to the power of science and technology, we have banished the ancient gods and forces of nature. Food, shelter, and physical security are all assured in the first world, and so humanity has directed its efforts to fulfilling post-material needs for status, power, and a moral society. In many ways, this is a zero-sum game; unlike material goods, status and power cannot be increased, only redistributed. Different cultures have profoundly different concepts of morality. For all our efforts to improve the second modern condition, it seems that the best we can do is run to stay in place. Post-material failure is one kind of second modern risk.

But while people worry about their job security, and their child's chances of getting into Harvard, and what their neighbors are up to, second modernity has its own apocalyptic horsemen. Flood, famine, fire, and plague are primitive problems. In their place, we have substituted the business cycle, anthropogenic climate change, and total war. Second modern risks are more worrying not just because they are bigger (mankind finally has the power to wipe itself out) but because they are human in origin, and therefore, in some sense, our responsibility. My fear is that decades or centuries from now, the weary, broken survivors of whatever ended our technological civilization will look back and say, "But why didn't they change?" How then can we, as individuals and as a collective, come to grips with both kinds of second modern risk?


20110423

Trust the Man in the White Lab Coat, He is Your Friend: or, Restoring Public Faith in Science

Science in the 20th century produced miracles. Physicists discovered the fundamental building blocks of the universe, chemists invented almost every modern object with plastics, biologists cracked the genetic code, and engineers literally flew to the moon. But at some point, the relationship between science and society went off the rails. Maybe it was a variety of food scares in the European Union, or perhaps the mandatory climate change denial for American conservatives. But whatever the cause, scientists lost the public trust. Those of us who account ourselves policy realists believe that accurate science is vital to proper policy formation. How then, can the public trust in science be restored?

In “See-through Science”, James Wilsdon and Rebecca Willis of Demos argue that public engagement with science has to move upstream. Rather than scientific knowledge flowing from the technical elite to an accepting public, scientists and ordinary people should be talking about the values, visions, and vested interests of emerging fields of research as early as possible. The goal is to create better, more socially robust science that doesn’t clash with public values at a later date, as occurred with embryonic stem cell research. The idea is to re-engage people with the scientific ideas that will drive the future.

“Taking European Knowledge Society Seriously” is a similar effort by a star-studded EU academic panel to diagnose how European science can be both socially responsive and a driver of innovation in the 21st century. Their recommendations are far-reaching, but center around the idea that ‘risk assessment’ has to incorporate broader values, and that political elites should be careful not to predetermine the framings of scientific controversy.

Personally, I’m doubtful of the ability of citizens’ juries, value mapping, or the other kinds of participatory efforts to positively alter the course of science, or the relationship between science and society. The day-to-day activities of science are fairly dull for those who are not already invested in them. Public participation would pick from the same select pool as criminal juries: the retired, the unemployed, and the flaky, and the effects of participation would not extend beyond their immediate social networks. Science is driven foremost by the immutable facts of nature, and their discovery and use; second, by the priority of novel results and the internal advancement of scientists within the community; and finally, by money, and the decisions by which grant panels, venture capitalists, and corporate executives allocate it. According to liberal political and economic theory, democracy and the free market already serve as adequate proxies for ‘public participation’ in deciding the direction of research.

But the weaknesses in these European STS policy pieces go deeper than an inability to alter the course of research. Worse, they don’t even attempt to figure out why the public distrusts science. This is a core issue, because without diagnosing the disease, there can be no purposeful attempt at a cure. And finding a cure is important, because the opposite of science is not apathy, but rather a particularly subversive and dangerous form of magical thinking.

People distrust science because science is inherently fallible. Every revision of a theory, every recall of a new drug or product, every breakdown in a complex socio-technical system demonstrates that science is weaker than the magical thinking associated with religion, dark green ecocentrism, climate change denial, and neo-classical economics. The incomplete, esoteric, and contradictory nature of these belief systems is in fact their strength, since any failure of their magic can be explained away. Science, without these ambiguities, must suffer until a paradigm shift.

A second aspect is the persistent disintegration of trust in our society. During the Cold War, political leaders (in alliance with scientists) were able to use the threat of imminent nuclear annihilation to create obedience. It is no surprise that the decline in the credibility of science happened at the same time as defense intellectuals were rendered irrelevant by the sudden collapse of the Soviet Union. People began to look for new theories that matched their own personal beliefs, theories that weren’t as hard to understand and didn’t change as rapidly as science. A few canny politicos realized that by destroying civic trust and the belief in an empirical, historical past, they could craft the past anew each election cycle, avoiding all responsibility for their mistakes. And so far, we’ve been rich enough and robust enough not to suffer any existential disasters from thinking magically, despite the purposeless wars in Iraq and Afghanistan, the flooding of New Orleans, the financial collapse, the BP oil spill, the Fukushima nuclear disaster, and so on.

The problem with directly attacking false beliefs and magical thinking is that it tends to alienate the audience you are trying to court, and may even entrench their status as an oppressed minority. But changing minds is very, very hard, and the first priority must be stopping the spread of the infection. We can’t censor, but we can ridicule, and demand to see the credentials of these peddlers of false beliefs. The ideals of equality and neutrality espoused by the mainstream media are fictions which have stopped being useful. Bullshit must be publicly exposed as such. Perhaps we need a new journalism award, the Golden Shovel, for the best demolition of bullshit and lies.

At the same time, we need to recast public education towards a realistic understanding of the limits of science, technology, and state power. People have impossible expectations for science: they demand that it solve ill-formed problems, such as the regulation of potentially toxic chemicals in the absence of useful models. Or they want their drugs safe, effective, and available now. Or they believe the Federal government has the power to plug a hole thousands of feet beneath the sea. At the same time as people learn about the limits of science, they should also be taught about the line between falsifiable science and unfalsifiable magical thinking. Of course, this will not be easy, especially at a high school level. I am barely coming to grips with these issues, and I’ve spent several years studying them. But more important than any factual knowledge is the ability to reason, to think critically, and to distinguish valid arguments from invalid ones. Until every member of the public can articulate their values, and the supporting evidence for them, efforts to input public values into science will be useless at best.


20110227

The Paradox of Dual-Use Research in the 21st century

A few days ago, I attended a short conference called Dangerous Liaisons held at the Biodesign Institute. Speakers included researchers in genetics and synthetic biology, the chairs of the National Science Advisory Board for Biosecurity (NSABB), the senior FBI agent for WMD threats, and an AAAS fellow in biosecurity. The subject of the talks was dual-use research, and how it can be controlled. The problem is that while genetics and synthetic biology offer tremendous benefits for health and new chemical products, these same technologies might empower criminals and terrorists, or even lead to an accidental bio-disaster. How can we regulate dual-use technologies for the safety of mankind?

(As a historical aside, it's only recently that dual-use has taken on these negative connotations. Dual-use used to be good. "Oh, you mean we can use these rockets to kill commies and explore the solar system? Awesome!" But civilian technologies with clear military implications are a relatively new phenomenon.)

The primary concern of all the presenters was that whatever form the regulations take, they not impede 'good science.' There were several good justifications for this: regulations that are too stringent will be disliked and evaded by the community, the science is advancing too quickly for central bodies to monitor and control, and impairing biology will leave America at a disadvantage both economically and in terms of responding to an actual incident.

The core problem of dual-use, as identified by the NSABB, is research that might make biological agents more deadly or transmissible. Specific examples include reconstituting the 1918 flu, or improving the deadliness of the mousepox virus, research which could easily be transferred to weaponizing smallpox. In the NSABB's view, the benefits of such research must be carefully balanced against the risks, and such weighing should be carried out at the most basic level, by researchers developing experiments and by existing Institutional Review Boards. The role of groups like the NSABB is to coordinate and develop guidelines.

The NSABB's guidelines might help protect against the accidental release of a bioweapon, but what about deliberate attackers? Much of the talk focused on creating a "culture of security" in biology labs. Background checks to work with select agents may miss many danger signs, and with new techniques, even non-select organisms might be dangerous. All the presenters spoke about the need for scientists to be alert for dangers in the lab. Special Agent Edward You, in particular, described his job not as catching potential bioterrorists, but as creating a framework so that scientists know who to call at the FBI if they see something. A second side of the culture of security is getting private gene synthesis firms to check orders against known pathogen genomes, and not synthesize smallpox genomes, for example, something that firms have currently volunteered to do.

On the one hand, this kind of voluntary regulation is the best, and maybe only, workable option; on the other hand, I have real concerns about what it means for the actual day-to-day practice of lab work. Quite literally, it requires that PIs monitor their students, and make sure that they're not spies, terrorists, or psychopaths. Is this really a fair burden to place on scientists, or is it a rerun of the "Red Scare"? One attendee asked quite penetrating questions about whether or not he should let Iranian PhD students work in his lab. The universality of science, the concept that a scientist should be judged by the merit of their ideas and not their personal background or place of origin, is not compatible with these kinds of concerns.

While private monitoring among firms is an option now, as the technology becomes cheaper and more widespread (and it will), how can the industry regulate itself against the existence of "grey hat", fly-by-night companies? I'm reminded of the situation with "research chemicals", synthetic hallucinogens which are structurally similar to banned substances but not covered by law, and their production by various shady chemical firms. The developing world in particular, where intellectual property restrictions are routinely evaded, may offer a fertile breeding ground for these malefactors.

So, is there hope for the future? Dr. Kathleen Bailey has stated that graduate students with $10,000 in equipment could synthesize substantial quantities of a biological agent. (Although it is worth noting that synthesizing an agent is not carrying out an attack. Many of the more difficult challenges in biological warfare involve distributing an agent, not producing it.) Whatever the exact resources required, on the spectrum from "the Unabomber in his cabin" to "the Iranian nuclear program", bioterrorism trends towards the lower end. However, while terrorist groups including Al Qaeda have enthusiastically pursued bioweapons, biological and chemical attacks have so far been extremely disappointing. The Aum Shinrikyo nerve gas attack on the Tokyo subway killed only 13 people, likely fewer than a conventional bomb would have. I agree with the presenters that the best defense against dual-use research is, ironically, actively pursuing this kind of research in order to develop countermeasures against an attack. Despite media hype, terrorists and lone wolves have not shown even the minimal organization necessary to carry out a bioweapons attack. We can, at least for the moment, trust the biology community.


20110225

Democrats, Experts, and STS

Governing is no easy task. While in some idealized Athenian past, every decision required of the body politic might have drawn solely on common sense, these days every decision is intertwined with knowledge known only to specialists in the relevant field; it is locked behind walls of expertise. The body politic, if it is not to flail randomly in insensate throes, must rely on the advice of experts. How, then, can rule by a small elite be reconciled with democracy?

The modern expert advisor is the spiritual descendant of Machiavelli. The brutally realist Italian revolutionized the Mirror for Princes genre, speaking directly in the vernacular, and cloaking his rhetoric in an objective "view from nowhere." To prove his credibility, Machiavelli erased himself, claiming merely to transmit the facts of history and psychology into applicable lessons on power. Early scientists, as exemplified by the British Royal Society of Boyle's era, used the same technique to 'merely transmit the facts of nature,' displaying for the public that which was self-evidently true.

The Machiavellian advisor works primarily at the point of power, at the person of the sovereign, but in a modern democracy, the sovereign is a fiction. The people rule, through their representatives. Though the relation of the people and their representatives is far from straightforward (representatives speak for the people, make decisions for the people, and serve as targets of blame for the people, among their diverse functions), a representative who strays too far from the desires of his or her constituents will soon fall. Therefore, expert advice applied at this level becomes useless once it departs from common knowledge. The experts and those who listen to them will be discarded at the first opportunity.

Instead, in a democracy, experts must also address the validity of their claims to the public. The end product of advice, and the advisory process itself, must appear credible. Science (roughly, the process of discovering facts about the natural world), in its Enlightenment legacy, together with the scientifically derived technologies around us, is one means of certifying the validity of expert claims, and of representative decisions. Yet because scientific claims speak to fundamental truths about the world, and can thereby override deliberation, astute politicians have learned to deploy counter-claims and counter-experts. Moreover, political figures have disseminated a narrative that discredits the ability of science to make any epistemically true and relevant claims about the world.

How then can scientists operate in a climate of such hostility? Dewey provides a model; by visualizing society as composed of a network of identities, with individuals belonging to multiple identities at once, he suggests that science can be democratized by tying as many people as possible to the "scientist" network. But what exactly is it that individuals should be educated in? There is no way for people to learn more than a scanty sampling of science. Rather, the chief science, the skill of kings, is learning to evaluate experts and their claims. There are universal patterns to how expert knowledge is created, and the vitamin that the body politic needs today is not more public scientific knowledge, but more public science, technology, and society scholarship.


20110124

Research ethics in nanotechnology

IEET Fellow and nanotech researcher Sascha Vonger has dropped a bomb on unethical practices in nanotechnology. According to him, rigorous research is being avoided in favor of flashy 'experiments' that are essentially non-scientific.

"Publish-or-perish culture turned science into an endeavor where deception is vital to get ahead, and nanotechnology ranks as one of the worst. A scientific field that has evolved this far into being a structure wherein deception is basically systemic cannot be trusted to self-regulate."


So what's the harm here? Many nanoethicists focus on existential risk: classic scenarios like grey goo replicators, nano-augmented superhumans, and other far-out ideas. Some more conservative thinkers worry about inequality, and whether nanotech will merely be a toy for those who are already rich and powerful, or whether nanomaterials can be used to improve the quality of life in the third world. And of course, the safety of nanoparticles in the environment has yet to be conclusively established, and there is some evidence that carbon nanotubes can cause cancer.

Vonger proposes another risk: that nanotechnology is failing to be a rigorous science, and that this is unethical. In the classic CUDOS framework (Merton's norms of communism, universalism, disinterestedness, and organized skepticism), nanotech lacks organized skepticism. Instead of a rigorous examination of an article, the community relies on various signals (the authors are PhDs, the article is in a respected journal) to verify the integrity of the science. This is far from ideal, but realistically, an individual researcher can't check the fundamentals of every fact or article he uses. An organized, trusted community standard makes science more efficient.

The problem, if Vonger is correct, is that nanotechnology is refusing to accept internal criticism of its technical methods, and is therefore producing bad knowledge. Bad knowledge can be fatal for a discipline in several ways: heralded results can be publicly overturned in an embarrassing way (see arsenic lifeforms), bad policy decisions can be made on the basis of bad science (vaccines and autism), or a series of individually harmless exaggerations can leave a field with no solid conceptual underpinnings. It is this last that nanotech is most vulnerable to: as a field essentially born on great expectations, gravely wounded in the Drexler-Smalley assembler wars, and under an immense burden of popular futurist pressure, its social structure isn't capable of dealing with criticism.

So what's the solution? There isn't an easy one, and it has to be implemented across many disciplines. No one ever became famous for disproving a scientific theory; we're intrinsically biased to favor positive results, and ongoing culture wars over creationism and global warming (to give two examples) have made scientists wary of being proved wrong in public. Asking scientists to have the moral fortitude not to engage in 'cargo cult science' is one solution, but in the face of incentives which reward rapid publishing, it will not work. Rather, science as a whole should favor more things like the Journal of Negative Results, and recognize that research is an inherently uncertain process. Fewer papers, better papers, and perhaps even a tiered system between proven results and speculation, as opposed to the informal system of credibility of knowledge we have now.


20101208

Why Scientists Aren't Republicans

Dan Sarewitz writes one of those articles about something that we all know, and that should prove terrifying.


A Pew Research Center Poll from July 2009 showed that only around 6 percent of U.S. scientists are Republicans; 55 percent are Democrats, 32 percent are independent, and the rest "don't know" their affiliation...
Could it be that disagreements over climate change are essentially political—and that science is just carried along for the ride? For 20 years, evidence about global warming has been directly and explicitly linked to a set of policy responses demanding international governance regimes, large-scale social engineering, and the redistribution of wealth. These are the sort of things that most Democrats welcome, and most Republicans hate. No wonder the Republicans are suspicious of the science.
Think about it: The results of climate science, delivered by scientists who are overwhelmingly Democratic, are used over a period of decades to advance a political agenda that happens to align precisely with the ideological preferences of Democrats. Coincidence—or causation?

Of course, Dan's a political thinker, an iconoclast, a bridge-builder. He goes on to advocate that scientists endeavor to show conservatives that they are not mere political shills. Scientists hold an immensely trusted position in American society (above 90%), and it'd be a shame to throw that away.

I prefer to take the opposite tack. What is it about Republican politics that is anti-science? Could it be that conservative positions on the environment, public health, economics, national security, and the origins of the universe are so obviously counter to reality that no one can consider themselves both a Republican and an astute observer of a real, physical universe? The level of cognitive dissonance required to maintain literacy with the frontiers of science while adhering to conservative ideology is completely unsustainable.

Even more deeply, perhaps there's something implicitly antagonistic between science and conservatism. Science relies on a belief that truth is contingent on What Is, and What Can Be Observed. It does not matter who postulated a theory, as long as it matches reality. And if a theory fails, then it, and all contingent facts, should be discarded. Conservatism, the worship of the past and a desire for stability, is antithetical to this project of continually tearing down and rebuilding reality.

Perhaps a better question is: Given that the world today is scientifically and technically constructed, that scientific truths are the 'best' truths, that technological artifacts define our lives, why should we listen to a group which is so fundamentally anti-science?

Not everything is relative. Sometimes there are right answers.