20110423

Trust the Man in the White Lab Coat, He is Your Friend: or, Restoring Public Faith in Science

Science in the 20th century produced miracles. Physicists discovered the fundamental building blocks of the universe, chemists put plastics into almost every modern object, biologists cracked the genetic code, and engineers literally flew to the moon. But at some point, the relationship between science and society went off the rails. Maybe it was the string of food scares in the European Union, or perhaps the climate change denial that became mandatory for American conservatives. Whatever the cause, scientists lost the public trust. Those of us who account ourselves policy realists believe that accurate science is vital to proper policy formation. How, then, can public trust in science be restored?

In “See-through Science”, James Wilsdon and Rebecca Willis of Demos argue that public engagement with science has to move upstream. Rather than scientific knowledge flowing from a technical elite to an accepting public, scientists and ordinary people should be talking about the values, visions, and vested interests of emerging fields of research as early as possible. The goal is to create better, more socially robust science that doesn’t clash with public values later on, as happened with embryonic stem cell research. The idea is to re-engage people with the scientific ideas that will drive the future.

“Taking European Knowledge Society Seriously” is a similar effort by a star-studded EU academic panel to diagnose how European science can be both socially responsive and a driver of innovation in the 21st century. Their recommendations are far-reaching, but center on the idea that ‘risk assessment’ has to incorporate broader values, and that political elites should be careful not to predetermine the framings of scientific controversies.

Personally, I’m doubtful of the ability of citizens’ juries, value mapping, and other participatory efforts to positively alter the course of science, or the relationship between science and society. The day-to-day activities of science are fairly dull for those who are not already invested in them. Public participation would draw from the same select pool as criminal juries: the retired, the unemployed, and the flaky, and the effects of participation would not extend beyond their immediate social networks. Science is driven foremost by the immutable facts of nature and their discovery and use; second, by the priority of novel results and the internal advancement of scientists within their community; and finally, by money, and the decisions by which grant panels, venture capitalists, and corporate executives allocate it. According to liberal political and economic theory, democracy and the free market already serve as adequate proxies for ‘public participation’ in deciding the direction of research.

But the weaknesses in these European STS policy pieces go deeper than an inability to alter the course of research. Rather, they don’t even attempt to figure out why the public distrusts science. This is a core issue, because without diagnosing the disease, there can be no purposeful attempt at a cure. And finding a cure is important, because the opposite of science is not apathy, but rather a particularly subversive and dangerous form of magical thinking.

People distrust science because science is inherently fallible. Every revision of a theory, every recall of a new drug or product, every breakdown in a complex socio-technical system makes science look weaker than the magical thinking associated with religion, dark green ecocentrism, climate change denial, and neo-classical economics. The incomplete, esoteric, and contradictory nature of these belief systems is in fact their strength, since any failure in their magic can be explained away. Science, without these ambiguities, must suffer until a paradigm shift.

A second aspect is the persistent disintegration of trust in our society. During the Cold War, political leaders (in alliance with scientists) were able to use the threat of imminent nuclear annihilation to create obedience. It is no surprise that the decline in the credibility of science happened at the same time as defense intellectuals were rendered irrelevant by the sudden collapse of the Soviet Union. People began looking for new theories that matched their own personal beliefs, that weren’t as hard to understand, and that didn’t change as rapidly as science. A few canny politicos realized that by destroying civic trust and the belief in an empirical, historical past, they could craft the past anew each election cycle, avoiding all responsibility for their mistakes. And so far, we’ve been rich enough and robust enough not to suffer any existential disasters from thinking magically, despite the purposeless wars in Iraq and Afghanistan, the flooding of New Orleans, the financial collapse, the BP oil spill, the Fukushima nuclear disaster, and so on.

The problem with directly attacking false beliefs and magical thinking is that it tends to alienate the audience you are trying to court, and may even entrench their status as an oppressed minority. Changing minds is very, very hard, so the first priority must be stopping the spread of the infection. We can’t censor, but we can ridicule, and demand to see the credentials of these peddlers of false beliefs. The ideals of equality and neutrality espoused by the mainstream media are fictions that have stopped being useful. Bullshit must be publicly exposed as such. Perhaps we need a new journalism award, the Golden Shovel, for the best demolition of bullshit and lies.

At the same time, we need to recast public education towards a realistic understanding of the limits of science, technology, and state power. People have impossible expectations for science: they demand that it solve ill-formed problems, such as those dealing with the regulation of potentially toxic chemicals, in the absence of useful models. Or they want their drugs safe, effective, and now. Or they believe the Federal government has the power to plug a hole thousands of feet beneath the sea. As people learn about the limits of science, they should also be taught about the line between falsifiable science and unfalsifiable magical thinking. Of course, this will not be easy, especially at a high school level. I am barely coming to grips with these issues, and I’ve spent several years studying them. But more important than any factual knowledge is the ability to reason, to think critically, and to distinguish valid arguments from invalid ones. Until every member of the public can articulate their values, and the supporting evidence for them, efforts to input public values into science will be useless at best.


20110410

∞ zoom

I spent the weekend cooking up an L-system renderer in Processing that zooms into the fractal indefinitely. L-systems are defined by recursive rewrite rules, so to zoom you simply apply the rewrite rule to visible edges, scale up, and discard off-screen edges. The actual rewrite depth is limited, and edges are selected for rewriting pseudo-randomly, which creates additional fractal clustering effects.
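In outline, the frame loop looks something like the following minimal Processing sketch. This is a sketch of the idea only, not the released code: the rewrite rule, the constants, and the crude visibility test are all stand-ins of my own.

```java
import java.util.ArrayList; // auto-imported by Processing, listed for clarity

ArrayList<float[]> edges = new ArrayList<float[]>(); // {x1, y1, x2, y2}, origin at screen center

void setup() {
  size(600, 600);
  edges.add(new float[]{ -60, 0, 60, 0 }); // the axiom: a single seed segment
}

void draw() {
  background(0);
  stroke(255);
  ArrayList<float[]> next = new ArrayList<float[]>();
  for (float[] e : edges) {
    for (int i = 0; i < 4; i++) e[i] *= 1.01; // zoom: magnify everything about the center
    // crude visibility test: discard the edge once both endpoints are far off-screen
    if (min(dist(0, 0, e[0], e[1]), dist(0, 0, e[2], e[3])) > width) continue;
    if (dist(e[0], e[1], e[2], e[3]) > 40 && random(1) < 0.05) {
      // pseudo-random rewrite: replace a long edge with two shorter branches
      float mx = (e[0] + e[2]) / 2, my = (e[1] + e[3]) / 2;
      next.add(new float[]{ e[0], e[1], mx - (my - e[1]) / 2, my + (mx - e[0]) / 2 });
      next.add(new float[]{ mx, my, e[2], e[3] });
    } else {
      next.add(e);
    }
    line(width/2 + e[0], height/2 + e[1], width/2 + e[2], height/2 + e[3]);
  }
  edges = next;
}
```

Because only visible edges are carried from frame to frame, memory stays bounded no matter how long the zoom runs.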

The project, including full-screen builds for OS X, Windows, and Linux, can be downloaded from SourceForge. The applet version is available here.

Staring at this animation causes the motion-perception fatigue (adaptation) effect. After looking at an expanding field for a long time, your motion-detecting neurons give up and stop firing, and when you look away at something that is not moving, you see opposite, inward motion. I find the effect can be exaggerated by sleep deprivation and stimulants like coffee.

Doubly interesting is that, after staring at this animation almost non-stop for 24 hours, the still screenshot above appears to me to contract, even though my motion perception is normal for other objects. This would cancel out any perceived expansion from the actual animation. I wonder if this is a learned prior on the behavior of the "zoom" applet: my brain expects the patterns to expand, and adjusts motion detection to match.

Triply interesting is that, after staring at the still screenshot, which I perceive as contracting, I get an opposite motion-fatigue effect: for a split second after looking away, I see expanding motion. This might indicate that the illusory expansion shares an adaptation mechanism with real motion.

Update: It appears that the motion-blur trails are related to the long-lasting illusory aftereffect. Additionally, the illusory motion is only perceived during saccades, which it shares with some other optical illusions. Does anyone else experience illusory motion with the above still? Could this be related to some mechanism that adjusts motion perception during saccades, which are fast and themselves cause motion blur on the retina?

There might be something slightly off with my visual system, but complex and long-lasting adaptation effects are well documented for other visual stimuli.


20110404

Risky Business

Twelve deep thinkers over at The Edge have a series on risk after the Fukushima disaster. I won’t try to reproduce the complexity and subtlety of their arguments, but risk and risk management are at the heart of what the Prevail Project is about. How can we think about risk in a domain of technological uncertainty? What does risk actually mean?

Risk is a modern concept, compared with the ancient and universal idea of danger. Dangers are immediate and apparent: a fire, a cougar, angering the spirits. Risk is danger that has been tamed by statistics: this heater has a 0.001% chance of igniting over the course of its lifespan, there are cougars in the woods, and so on. Risk owes its origins to the insurance industry and Lloyd’s of London, which was founded to protect merchant-bankers against the dangers of sea travel. While any individual ship might sink, on average most ships would complete their voyages, so investors could band together to prevent a run of bad luck from impoverishing any single member of the group.
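To make the arithmetic concrete, here is a toy Monte Carlo version of the pooling argument; the 5% sinking rate, the fleet size, and the "ruin" threshold are invented numbers, not historical data:

```java
import java.util.Random;

// Risk pooling, Lloyd's-style: a lone merchant versus a pool of 100.
public class Pooling {
    public static void main(String[] args) {
        Random rng = new Random(42);
        int trials = 100000, ships = 100;
        double pSink = 0.05; // each voyage sinks with 5% probability (assumed)
        int soloRuin = 0, poolRuin = 0;
        for (int t = 0; t < trials; t++) {
            // a lone merchant is ruined whenever his one ship goes down
            if (rng.nextDouble() < pSink) soloRuin++;
            // the pool is "ruined" only if more than a tenth of the fleet sinks
            int sunk = 0;
            for (int s = 0; s < ships; s++)
                if (rng.nextDouble() < pSink) sunk++;
            if (sunk > ships / 10) poolRuin++;
        }
        System.out.printf("lone merchant ruined: %5.2f%% of years%n", 100.0 * soloRuin / trials);
        System.out.printf("pool of %d ruined:   %5.2f%% of years%n", ships, 100.0 * poolRuin / trials);
    }
}
```

The pool converts a one-in-twenty chance of total loss into a small, predictable annual expense, which is precisely the conversion of danger into risk that Lloyd’s sold.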

This kind of risk is simple and easy to understand. It is what mathematicians refer to as linear: a change in an input, like the season, maps directly to an outcome, like the number of storms and the number of ships sunk. The problem is that this idea of risk has been expanded to cover complex systems with many inter-related parts. As complexity goes up, comprehensibility goes down, and risks expand in complicated ways. Modern society is “tightly coupled”, a concept developed by Charles Perrow in his book Normal Accidents. Parts are linked in non-obvious ways by technology, ecology, culture, and economics, and a failure in a single component can rapidly propagate through the system.
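The coupling threshold shows up even in a toy branching-process model (my own simplification, not Perrow's): suppose each failed part takes down each of its k coupled neighbors with probability p. When k*p is below 1, cascades fizzle out; once it crosses 1, they grow until they hit the edge of the system:

```java
import java.util.Random;

// A toy model of tight coupling; k, p, and the cap are invented parameters.
public class Cascade {
    static final Random RNG = new Random(1);

    // one incident: each failed part fails each of its k neighbors with probability p
    static int cascadeSize(int k, double p, int cap) {
        int failed = 1, frontier = 1;
        while (frontier > 0 && failed < cap) {
            int next = 0;
            for (int i = 0; i < frontier * k; i++)
                if (RNG.nextDouble() < p) next++;
            failed += next;
            frontier = next;
        }
        return Math.min(failed, cap); // the cap stands in for "the whole system is down"
    }

    public static void main(String[] args) {
        for (double p : new double[]{ 0.15, 0.25, 0.35 }) { // k*p = 0.6, 1.0, 1.4
            long total = 0;
            for (int t = 0; t < 10000; t++) total += cascadeSize(4, p, 1000);
            System.out.printf("k=4, p=%.2f -> mean failures per incident: %.1f%n",
                    p, total / 10000.0);
        }
    }
}
```

The unnerving property is how sharp the transition is: a modest tightening of the couplings moves a system from routine, self-limiting faults to occasional total collapse.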

The 2007 financial crisis is a perfect example of a normal accident caused by tight coupling. Financiers realized that while housing prices fluctuate, they are usually stable on a national basis, and so developed collateralized debt obligations based on ‘slices’ of the nation-wide housing market, which were rated as highly secure investments. When the housing bubble collapsed, an event not accounted for in their models, trillions of dollars in investments lost all certain value. Paralysis spread throughout the financial system, leading to a major recession. While this potted history is certainly incomplete, normal accidents are the defining feature of the times. The 2010 Gulf of Mexico oil spill and the Fukushima meltdown are both due to events which were not accounted for in statistical models of risk, but which in hindsight appear inevitable over a long enough timescale.

Statistics and scientific risk assessment are based on history, but the world is changing, and the past is no longer a valid guide to the future. Thousand-year weather events are more and more frequent, while new technologies are reshaping the fundamental infrastructure of society. When the probabilities and the consequences of an accident are entirely unknowable, how can we manage risk?

One option is the precautionary principle, which says that until a product or process is proven entirely safe, it is assumed to be dangerous. The problem with the precautionary principle is that it differs in degree, not in kind: it demands extremely high probabilities of safety, but doesn’t solve the problem of tight coupling. Another approach is basing everything on the worst possible case: what happens if the reactor explodes, or money turns into fairy gold. Systems that can fail in dangerous, expensive ways are inherently unsafe, and systems whose failures have more local consequences should be chosen instead. This approach has twin problems. The first is demarcating realistic from fantastic risks; after all, some Rube Goldberg scenario starting with a misplaced banana peel might lead to the end of the world. The second is that it discounts ordinary, everyday risk. Driving is far more dangerous per passenger-mile than air travel, yet people are much more afraid of plane crashes. A framework based on worst-case scenarios leads to paralysis, because everything might have bad consequences, and prevents us from rationally analyzing risk. Its end state is being afraid to leave the house because you might get hit by a bus.

So the ancient idea of danger no longer holds, because we can’t know what is dangerous anymore, and mere fear of the unknown cannot stand against the impulse to understand and transform through science and technology. Risk has been domesticated in error; a society built on risk continually gambles with its future.

The solution involves decoupling: building cut-outs into complex systems so they can be stopped in an orderly manner when they begin to fail, and decentralizing powerful, large-scale infrastructure. Every object in the world is bound together in the technological and economic network that we call the global economy. We cannot assume that it will function the way it has forever; rather, we should trace objects back to their origins, locate the single points of failure, the places where large numbers of threads come together, and develop alternative paths around those failure points. Normal accidents are a fact of life, but there is no reason why they have to bring down people thousands of miles away from their point of origin.
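Software has a familiar, small-scale version of the cut-out: the circuit-breaker pattern. The sketch below is generic (my own naming and parameters, not any particular library): after repeated failures, stop calling a sick dependency and degrade locally, instead of letting hung calls drag the failure upstream.

```java
import java.util.function.Supplier;

// A minimal circuit breaker: after `threshold` consecutive failures, the
// breaker opens and calls fail fast to a local fallback until the cooldown expires.
public class CircuitBreaker {
    private final int threshold;
    private final long cooldownMs;
    private int failures = 0;
    private long openedAt = 0;

    public CircuitBreaker(int threshold, long cooldownMs) {
        this.threshold = threshold;
        this.cooldownMs = cooldownMs;
    }

    public <T> T call(Supplier<T> remote, T fallback) {
        boolean open = failures >= threshold
                && System.currentTimeMillis() - openedAt < cooldownMs;
        if (open) return fallback;        // decoupled: absorb the failure locally
        try {
            T result = remote.get();
            failures = 0;                 // dependency is healthy again: close the breaker
            return result;
        } catch (RuntimeException e) {
            if (++failures >= threshold) openedAt = System.currentTimeMillis();
            return fallback;
        }
    }
}
```

Wrapped around, say, a price-feed call, the breaker serves a cached quote while the feed is down, and the outage stops at that boundary instead of cascading through everything that depends on it.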