Social Media and the Revolution in Egypt

Revolution in Egypt is the story of the week. Over the past seven days, ordinary Egyptians have come together to oppose the 30-year rule of Hosni Mubarak. The protests are a wonder of civil disobedience, technological coordination, and bottom-up action. But what are its origins, its chances, the potential outcomes, and the options for US policy?

Revolution is like a fire. All the necessary ingredients can exist independently for years, bursting into a self-sustaining conflagration only under the proper conditions. Fires require fuel, oxygen, and heat to burn, and fire-fighting strategies rely on eliminating one leg of that triangle: wildfires are fought by separating the fire from more fuel, while conventional fires are doused with water to reduce the heat available. The social analogs of fuel, oxygen, and heat are grievances, the public sphere, and emotional intensity.

Grievances are the raw fuel of revolution. In the case of Egypt, three decades of misrule, oppression, torture, and the stifling of economic and political freedom have left the population with an endless stock of grievances. Except for a small class of Mubarak's cronies, few people have benefited from his rule, and a society that was once integrated has become divided into an urban poor and an exurban elite. The demands of the protestors have unified into one simple message: Mubarak out.

The public sphere is vital for any protest to organize and gather momentum. This includes both the conventional public sphere of streets and squares and the public sphere of information. From the first, revolutionaries have used the latest in information and communication technology: printing presses during the American Revolution, tape decks in the Iranian Revolution. The CIA smuggled Xerox machines into the Soviet Union to spread samizdat, the hand-to-hand distribution of banned books and magazines, while the protestors in Tiananmen Square used fax machines to communicate with the world. Among the modern ICTs, text messaging helped bring down Philippine President Joseph Estrada, and social media like Twitter and Facebook have been central to the current Egyptian revolution, the Jasmine Revolution in Tunisia, and protests in Iran and Ukraine.

Conventional counter-protest tactics involve squeezing out the public sphere. Physically, riot police and tanks can occupy strategic areas, with curfews for normal traffic. In the modern era, totalitarian regimes have attacked cyberspace as well. Egypt shut down the internet entirely for a day, while Iran slowed external access to a crawl during its crisis. And China is notorious both for a totalitarian system of internet traffic monitoring and censoring, and for shutting down telecom service during riots in Tibet and Xinjiang. While shutting down the public sphere is effective in stopping protests, it is risky: it requires the use of mass violence, which might further inflame the opposition, and if continued for too long it can cause serious economic damage.

But don't confuse the utility of social networking with the necessity of it. Compared to other forms of ICT, social media plays into the third requirement for revolution: emotion. This is the non-quantifiable element that makes people dare to stand against power, to face batons and teargas and bullets in the name of liberty. Emotion is personal, internal, idiosyncratic, and social networking is about broadcasting your feelings more than any specific information. A robust internet community can transmit and amplify anger and the demand for change. The sparks of rage can spread from city to city with incredible rapidity, and once enough sparks have landed and the crowds have gathered, the revolution becomes self-sustaining. Optimism is countered by fear: the longstanding fear of the regime, and fear of future chaos and repercussions. In Egypt, the successful revolution in Tunisia provided the impetus of hope to counter three decades of oppression. Mubarak has tried to instill fear in the population through agents provocateurs and the threat of military force, but has so far proven unsuccessful. If the heat of the revolutionaries can outlast the resolve of the military, they will win.

The flame of revolution, kindled in Tunisia, is spreading throughout the Middle East. The authoritarian regimes are like a forest which has not burned for years, with piles of dead leaves and trees lying about. What happens next is impossible to predict, but sparks are jumping, governments falling, and a brave new world may be at hand.

Pt II: On what happens next, will follow tomorrow.


Innovation, but why?

Ancient peoples worshiped many gods, but modern civilization bows before a single principle: Innovation. As President Obama said in Tuesday's State of the Union address, “In America, innovation doesn't just change our lives. It is how we make our living.” He went on to use the word innovation ten more times, making it the major theme of his speech. Innovation is more than just a word; its influence can be seen in the way that major institutions, such as business and the military, have reorganized themselves around a state of permanent innovation. In the following, I will examine two paths to this state, and its consequences for the scientific community and society at large.

Carlson traces the development of the corporate research and development lab. The first innovators were inventors, craftsmen who improved devices increment by increment. But as a systematic source of innovation, these small inventors, typical of the Industrial Revolution, were hobbled by a lack of capital and the limitations of individual human knowledge. While tinkering with existing devices and principles was within the reach of many ambitious craftsmen, truly novel principles and the means to bring advanced technologies to market were out of reach.

Carlson traces the dawn of institutional innovation to the telegraph. As Western Union spread across the country, competing with local firms, railroads, financiers, and anti-trust lawyers, it became apparent that the difference between profit and extinction lay in harnessing the latest in electrical technology, usually by buying patents from private inventors. Thomas Edison parlayed his success as an inventor into an immense private workshop, but it was General Electric and its chief scientist, Elihu Thomson, who created the modern model of corporate R&D in 1900. Frustrated by the coordination between scattered factories required to build an experimental car, Thomson convinced the GE board to create a permanent lab conducting basic research.

At first, the purpose of the lab was purely defensive, to protect GE products from superior competitors. But as time passed, industrialists realized that new knowledge could be used offensively: to create new markets, to trade with competitors, and to improve public standing. Compared to the 'random genius' of inventors, management preferred scientific innovation because it seemed predictable and controllable. This basic pattern, with the added details of intra-industry collaboration and federal support of risky technologies, has continued into the 21st century. In real terms, though, large R&D labs have been responsible for surprisingly few breakthroughs; much of the most creative work has come from smaller companies, a model best demonstrated in biotech and computing, where small start-ups with one piece of very valuable IP are purchased and developed by larger conglomerates.

A second side of institutional innovation is the military, which supports up to half of the basic research conducted in America. War and technology have long been closely intertwined, as brilliantly explored by William McNeill in The Pursuit of Power. Perhaps the first noteworthy institutionalization of innovation was the British shipbuilding industry circa 1900, where an “Iron Triangle” of shipyards, admirals, and hawkish liberal politicians pushed steel to its limits with ever more powerful battleships. But it was not until WW1 that innovative warfare had its first chance to shine. Innovation was applied haphazardly, in the form of machine guns, poison gas, aircraft, tanks, submarines and anti-submarine warfare, but there was little coordination between scientists and soldiers. A new weapon would make an initial splash, then quickly add to the stalemate. The war was eventually decided by a German economic collapse.

Many of the scientific institutions of WW1 were dismantled in the interwar years, but WW2 was above all a war won by cutting-edge science. Radar, operations research, airpower, and of course the atomic bomb were all products of Allied scientific knowledge, while jet fighters and rockets rolled off Nazi production lines at the close of the war. Federally supported labs and defense companies that sold solely to the government proliferated, too many to name. With an obvious and immediate clash between the Allies and the Soviet Union at hand, neither side disarmed its scientific apparatus. Both sides sought to avoid qualitative defeat, or worse, technological surprise, investing ever larger sums in military R&D and leading to the domineering “military-industrial complex” of President Eisenhower's farewell address.

For scientists, these twin processes have been a mixed blessing. On the one hand, science has obtained a great deal of funding from industrial and military sources, orders of magnitude more than the pure 'pursuit of truth' ever commanded. Yet scientists have lost their autonomy, tied either to market forces or to military imperatives. Biomedicine has improved healthcare, but also exponentially increased its costs; the process of introducing a new drug is more akin to marketing than to science or medicine. Through the military, “science has known sin,” to paraphrase Oppenheimer's haunting phrase. Where for a period from about 1850 to 1945 the scientist could truly claim to represent a universal humanity, working towards the ends of destruction has permanently damaged scientific prestige and credibility, subordinating the values of science to petty, nationalist ends.

For society, the pursuit of innovation has led to the threat of man-made extinction through nuclear war. The process of action-reaction in the arms race brings us ever closer to the brink of annihilation. From the market side, the permanent churning of the basic constituents of society has created an immense dislocation. Skills and jobs can become obsolete in less than a decade. With new-found material wealth came a crass materialism. The objects around us change constantly, their principles of operation becoming ever more opaque. The deep sense of unease pervading American society might reasonably be traced to chronic future shock. Innovation is a god, but it has become Moloch, concerned solely with profit and military might.

So, to return to the State of the Union: I've read it several times, and I feel conflicted. It's a good speech, certainly, and I agree with many of the specific policies he outlines for continued investment in innovation, yet there is a certain hollowness to it, a failure to grapple with the crux of why we innovate. The main drive to innovate is material: the jobs of the 21st century should be located in America. Yet we don't know that innovation will bring back jobs; at best we know from the lessons of the past that a failure to innovate will mean the loss of more jobs. The ultimate hollowness came at the end. President Obama made a deliberate callback to the space race with the phrase “Sputnik moment,” but President Kennedy knew where we were going: the moon, in ten years.

Obama's answer to Kennedy, “I'm not sure how we'll reach that better place beyond the horizon, but I know we'll get there. I know we will.”

That's certainly true. We'll definitely make it to the future the old-fashioned way, by living it, one day at a time. But that's no guarantee that the future will be any place we want to live. Right now, all we have is a notion that America must be wealthier than China. As individuals, as a nation, and as a species, we must decide what is beyond that horizon, and we must build the institutions of governance to take us there.


Davos: Back to the Future

Parag Khanna has a fascinating thesis on what the Davos conference really is. Davos, for the unfamiliar, is "where each January the planet’s most influential heads of state, CEOs, mayors, religious leaders, NGO heads, university presidents, celebrities and artists flock for the annual meeting of the World Economic Forum (WEF), an event that over the past four decades has established itself as what “60 Minutes” last year dubbed “the most important meeting on Earth.”" But more than a meeting, Davos represents a new paradigm for diplomacy, one that takes place outside the conventional structure of nation-states dating back to the Treaty of Westphalia. And it's not just a new paradigm, it's a better one.

Compared to the modern inter-state diplomatic system, Davos represents anti-diplomacy—and yet it actually reflects the true parameters of global diplomacy today better than the United Nations. The reason is that in our ever more complex diplomatic eco-system, relations among governments represent only one slice of the total picture. Beyond the traditional “public-public” relations of embassies and multilateralism, there are also the “public-private” partnerships sprouting across sectors and issues. Qatar’s natural gas fortunes hinge on its arrangement with Exxon, India’s ability to attract foreign investment is contingent on support from the business magnates who make up the Confederation of Indian Industry (CII), and the alliance of the Gates Foundation, pharmaceutical company Merck, and the government of Botswana saved the country’s population from being wiped out by AIDS, to name just a few of the now literally countless such arrangements flourishing today. The third and often neglected dimension of the new diplomacy is “private-private” interactions which circumvent the state altogether. Think of the Environmental Defense Fund dealing directly with Wal-Mart to cut the company’s overall emissions by 20 million metric tons and install solar panels at 30 new locations. The diplomats at Cancun could only dream of such concrete measures.

All three of these combinations of negotiating partners thrive at Davos and in all WEF activities, which range from mini-Davos-style regional conferences to year-round multi-stakeholder initiatives in public health, climate change, anti-corruption and other areas. The WEF does what no U.N. agency would ever do: allow “coalitions of the willing” to organically “grow and go”—incubating them but also quickly spinning them off into self-sustaining entities; but importantly also letting projects die that fail to attain sufficient support from participants. In this sense the WEF is both a space for convening but also a driver of new agendas.
Absolutely fascinating: the idea that an alliance of the super-wealthy and professional activists could make a difference in the world more effectively than traditional governments. The end of the nation-state is something we've seen predicted before, along with the rise of a new global elite, but Khanna puts an interesting spin on it, hearkening back to the Middle Ages, when a variety of actors could influence global diplomacy, not just people bearing the seals of powerful nations. This is more than anarchy or oligarchy; this is the return of an ancient and resilient system of governance. We can only hope that it has some way of implementing wise decisions, and not just imposing choices for personal benefit from the top down.

John Robb is far less sanguine, in typically vituperative fashion:
"[Davos] is a collection of elites generated by the antiquated, hierarchical systems of the 20th Century -- akin to a collection of corrupted inebriated noblemen from depleted, inbred bloodlines discussing the future of war, peace, and prosperity during the post fox-hunt feast."
Well, yes, and it's certainly neither democratic nor accountable. But if Davos is where the action really is, then we need to be paying attention. And this new ruling class is, at a minimum, more egalitarian and less concerned with holding power forever than the ones that have come before.

Khanna does make one critical point, and I'll leave it in his words:
Global governance is not a thing, not a collection of formal institutions, not even a set of treaties. It is a process involving a far wider range of actors than have ever been party to global negotiations before. The sooner we look for new meta-scripts for regulating transnational activities and harnessing global resources to tackle local problems the better. Davos continues to be a good place to start.

Amen. Global governance starts with all of us.


What is a tempotron? I do not know.

Robert G├╝tig & Haim Sompolinsky, The tempotron: a neuron that learns spike timing–based decisions, Nature Neuroscience 9, 420 - 428 (2006) doi:10.1038/nn1643

Hey guys, is anyone familiar with the ideas in this paper? It seems like it's some kind of alternative neural network model that tries to explicitly incorporate the idea of spike trains, rather than just being an integrator composed with a nonlinear function at every neuron.

I'm going to try to read it tonight (possibly not in great depth), since there is a talk I want to go to. If people are interested I can try to give some kind of summary.
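From the abstract, my rough guess at the model (all parameters below are invented, so treat this as a sketch, not the paper's actual scheme): each input spike contributes a weighted postsynaptic-potential kernel, and the neuron classifies a spike pattern by whether its peak voltage crosses a threshold.

```python
import math

def psp_kernel(dt, tau=15.0, tau_s=3.75):
    """Postsynaptic-potential kernel: a difference of exponentials.

    dt is time since an input spike, in ms; zero before the spike.
    """
    if dt < 0:
        return 0.0
    return math.exp(-dt / tau) - math.exp(-dt / tau_s)

def tempotron_response(spike_times, weights, t_max=100.0, step=0.5):
    """Peak membrane potential given per-synapse input spike trains."""
    peak, t = 0.0, 0.0
    while t <= t_max:
        v = sum(w * psp_kernel(t - s)
                for w, spikes in zip(weights, spike_times)
                for s in spikes)
        peak = max(peak, v)
        t += step
    return peak

# Two input synapses firing at different times; the decision is
# whether the weighted peak crosses a (made-up) threshold.
spikes = [[10.0, 30.0], [12.0]]
weights = [0.8, -0.3]
fires = tempotron_response(spikes, weights) > 0.5
```

The point, as I read it, is that the decision depends on when the spikes arrive, not just how many there are: shifting a spike train in time changes the peak even with an identical spike count.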

Research ethics in nanotechnology

IEET Fellow and nanotech researcher Sascha Vonger has dropped a bomb on unethical practices in nanotechnology. According to him, rigorous research is being avoided in favor of flashy 'experiments' that are essentially non-scientific:

"Publish-or-perish culture turned science into an endeavor where deception is vital to get ahead, and nanotechnology ranks as one of the worst. A scientific field that has evolved this far into being a structure wherein deception is basically systemic cannot be trusted to self-regulate."

So what's the harm here? Many nanoethicists focus on existential risk, classic scenarios like grey goo replicators, nano-augmented superhumans, and other far-out ideas. Some more conservative thinkers worry about inequality: whether nanotech will merely be a toy for those who are already rich and powerful, or whether nanomaterials can be used to improve the quality of life in the third world. And of course, the safety of nanoparticles in the environment has yet to be conclusively established; there is some evidence that carbon nanotubes can cause cancer.

Vonger proposes another risk, that nanotechnology is failing to be a rigorous science, and that this is unethical. In the classic CUDOS framework, nanotech does not have organized skepticism. Instead of a rigorous examination of an article, the community is relying on various signals (the authors are PhDs, the article is in a respected journal) to verify the integrity of the science. This is far from ideal, but realistically, an individual researcher can't check the fundamentals of every fact or article he uses. An organized, trusted community standard makes science more efficient.

The problem (if Vonger is correct) is that nanotechnology is refusing to accept internal criticism of its technical methods, and is therefore producing bad knowledge. Bad knowledge can be fatal for a discipline in several ways: heralded results can be publicly overturned in an embarrassing way (see arsenic lifeforms), bad policy decisions can be made on the basis of bad science (vaccines and autism), or a series of individually harmless exaggerations can leave a field with no solid conceptual underpinnings. It is this last that nanotech is most vulnerable to. As a field essentially born on great expectations, gravely wounded in the Drexler-Smalley assembler wars, and under an immense burden of popular futurist pressure, its social structure isn't capable of dealing with criticism.

So what's the solution? There isn't an easy one, and it has to be implemented across many disciplines. No one ever became famous for disproving a scientific theory; we're intrinsically biased to favor positive results, and ongoing culture wars over creationism and global warming (to give two examples) have made scientists wary of being proved wrong in public. Asking scientists to have the moral fortitude not to engage in 'cargo cult science' is one solution, but in the face of incentives which reward rapid publishing, it will not work. Rather, science as a whole should favor more things like the Journal of Negative Results, and recognize that research is an inherently uncertain process. Fewer papers, better papers, and perhaps even a tiered system between proven results and speculation, as opposed to the informal system of credibility we have now.


What's Taking so Long?

Anybody who's ever had to deal with a remodel or roadwork knows that construction is slow, slow, slow. Of course, it wasn't always this way. The Empire State Building was built in 14 months and came in under budget, the Pentagon was built in 16 months, and the Hoover Dam took five years. So what the hell happened since then?

That was the topic of a lecture I attended today by Edd Gibson, ASU construction expert. He made several valuable points, which I would like to extend and speculate on, if I may. Gibson noted that there are several common threads to successful projects: strong leadership, a sense of urgency and purpose, intensive planning, excellent communication, and innovation. Additionally, many of these great projects either failed to turn a profit for years, or required significant renovations afterwards.

The second part of Gibson's talk focused on the difference between successful projects and failures. This is more subjective than one might think: success is a matter of perception and matching prior expectations. However, you can reliably detect the difference between projects headed for success and for failure using the Project Definition Rating Index, a scorecard that measures how well the team understands its objectives, the local context, and its ability to work together.

Leadership and teamwork are important, but they're also largely intangible qualities (unless of course they aren't). What I want to know are the social and technical factors that have driven this slowdown. Gibson alluded to regulation as a force that hinders rapid completion. While this point is not really contentious, it's also not something that I've seen conclusively proven. Are there specific regulations (worker safety, public input, material and architectural standards, inspections for various subsystems) that delay projects? It's easy to point to regulation as a vague bogeyman, but regulation also ensures that buildings are safe to use, and embodies the virtue of clear planning which Gibson correctly places so highly.

When I first heard about this lecture, I assumed that the problem was technology. Simply put, buildings are vastly more complex than they were 60 years ago. If the Empire State Building were built today, it would be LEED certified, wi-fi enabled, ergonomic, subject to all sorts of review, etc. On the other hand, CAD makes design much easier than paper drafting, and logistics systems are much more efficient. Communication technology is better, but according to Gibson, there's no substitute for face to face, an opinion that I share. It's too easy to form a sham consensus in cyberspace.

What I fear is that the very mechanisms of public participation and input that I generally champion have in fact led to the perennial inability to rapidly complete projects. It's too easy for last-minute legal challenges to derail a major project. Technology might also contribute: the very protean ease of improvising with technology might hinder proper planning (this is certainly the case with my weekends). We might just have to accept that the great works of the mid-20th century were an industrial anomaly. But hopefully, their 21st-century equivalents will be more durable.


The Rise and Decline of Military Human Enhancement

In the past decade, the U.S. military’s interest in human enhancement technologies has waxed and waned. An initial surge of interest, fueled by a desire to create the “Future Force Warrior,” has given way, over time, to the more mundane challenges of meeting the needs of soldiers in Afghanistan and Iraq. We would be fooling ourselves, however, if we believed that the U.S. military had abandoned efforts to upgrade the soldier’s body and mind to match the pace of modern warfare. We are in, at best, a lull in military investments in human enhancement research. That is why now is a good time to start asking hard questions about how—and indeed if—we should proceed along this course.

In 2002, Dr Joseph Bielitzki, chair of DARPA’s Defense Sciences Office, announced a grand program to improve soldiers, with the slogan “Be all that you can be, and a lot more.” His targets: sleep, fatigue, pain, and blood loss. Other projects studied psychological stress, memory, and learning. The next year, the Army launched the multibillion dollar Future Combat System to transform the military into a fast and flexible force of networked sensors, combat vehicles, and wired soldiers. The words on everybody’s lips were “human enhancement,” the use of science and technology to upgrade the human body and mind. Advances in the life sciences would make soldiers more than human, while computers, digital sensors, and smart communication systems would replace the rigid military hierarchy. According to military futurists, the then-new War on Terror required a new type of soldier, independent, fast and more lethal than ever before.

Read the rest at Science Progress

I got published on a more official blog a few weeks ago, but in my haste I forgot to add a link to it on We Alone. So here it is.


Augmenting Humanity @HeatSync Labs

Let's start this year off with a bang! I just got back from HeatSync Labs, where the local hackers are taking their eyes off of 3D printing, near-space missions, tesla coils, and cylon Roombas and working on something a little closer to home: themselves.

Okay, that requires a little explanation. HeatSync is a Phoenix-area hackerspace, a place for technically inclined people to come together, pool resources, and work on interesting projects. Hackerspaces started in the nerdvanas of Silicon Valley and Route 128, but the movement is spreading across the country, and expanding from electronics to biotech. With the democratization of technical equipment, almost anybody can be a scientist. The hackerspace movement scales up the joy of just messing around with blinkenlights to an adult level, and it might just serve as the incubator for the next wave of innovation.

Augmenting Humanity @HeatSync is a small group of hackers with an interest in transhumanism, and in using their DIY skills to improve themselves. They're well on their way. Jacob has an experimental magnetic sense, and wants more radical alterations. Harry is a recreational neuroscientist. He already has a 14-channel EEG he freed from an Emotiv controller, and his next step is to make an Arduino-based DC neural stimulator, as well as his own version of ze goggles. With all this, he's well on his way to doing some really interesting science. Jeremy is into quantified self, and wants to use smartphones and wireless sensors to make data collection trivial. Bryan is blind, and is working with Apple to improve the accessibility of iDevices, while trying to find hacks to make his life easier. His current project is a liquid level sensor. As a father of three, Bryan spends a lot of time filling bottles, and a device which beeps at the proper level would be good for everybody.

These guys are definitely aware of the social and political aspects of what they're doing. They view themselves as citizen-scientists in the vein of the old Royal Society, and they want both to improve themselves and to generate useful knowledge that the standard scientific research process won't touch, either because it won't be funded or because it violates medical ethics. (Note: ethical medical research must treat a disease, so by definition, enhancement is unethical. The current workaround has been 'medicalization', creating a disorder for people who want to be enhanced. Many people, myself included, think this is a major problem.) They are also very forward about getting their work out there and connecting with like-minded hackers across the world. In the absence of formal journals, all of this is being organized through social media: blogs, wikis, and video chat. We are lucky enough to live in an era where information can be shared easily and advanced technology is cheap. In the next few months and years, I hope to spend a fair amount of time with Augmenting Humanity, develop some projects, and get them out there. But for now, rest easy knowing that these people have the future well in hand.

((I'll also be taking suggestions on projects/topics to talk about with these guys))


(Taken from an e-mail wherein I try to briefly explain "information" to an English major.)

Information, in math and science, is formalized as a collection of answers to yes-no questions. Usually we talk about binary strings of bits "01011011101", and say that "1" is a "yes" and "0" is "no". Information is not meaning. "01011011101" can be a series of random coin tosses, or encode the password to your computer. How information is interpreted gives it meaning.[1]
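A toy illustration of that last point (my own example, not from any particular source): the same bit string yields different meanings under different interpretations.

```python
bits = "01011011101"  # eleven answers to yes-no questions

# Interpretation 1: read the bits as a binary number.
as_number = int(bits, 2)  # 733

# Interpretation 2: read each bit as the outcome of a coin toss.
flips = ["heads" if b == "1" else "tails" for b in bits]

# The information is identical; only the interpretation differs.
```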

Information, as a series of yes-no questions, can be transformed to do useful things: for example, controlling a robot or displaying pixels on your computer monitor. Translating a series of yes-no questions (bits) into the action of a machine can be done mechanically and automatically. This transformation could be considered the act of interpreting, or understanding the meaning of, information.[1]

While it's clear that human communication involves conveying information, we don't understand natural language as well as we understand computer communication. Studies suggest that English transmits about 0.5 bits per letter[1]. In other words, if I told you I was thinking of a sensible English phrase 100 letters long, you could probably guess what it was in about 50 yes-no questions. It is much harder to say whether or not a human understands the meaning of information. There has been a lot of recent research on "information" within the brain.
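The arithmetic behind that guessing claim, sketched out (0.5 bits/letter is the rough experimental figure cited above; exact estimates vary):

```python
import math

letters = 100
bits_per_letter = 0.5                    # rough figure for sensible English
information = letters * bits_per_letter  # 50 bits total

# 50 bits of information picks out one phrase from about 2**50 equally
# likely candidates, and each well-chosen yes-no question halves the pool.
questions = math.ceil(math.log2(2 ** information))  # 50

# A meaningless random string of the same length needs far more questions:
# each letter carries the full log2(26) ≈ 4.7 bits.
random_questions = math.ceil(letters * math.log2(26))  # 471
```

The gap between 50 and 471 is exactly the redundancy of English, which is what lets you finish other people's sentences.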

The idea of information is pretty trippy. Take DNA, for example:

DNA encodes information called a gene. The gene is the information, not the DNA encoding it. Since we can sequence DNA, we can store a gene inside a computer, on a printout, or carved as a collection of scratches in stone. The cellular machinery that reads DNA and produces life is the only system that can completely understand the meaning of DNA, so we still need to take that gene, synthesize it back into DNA, and insert it into a living cell in order to really understand what the gene does (the same gene may do different things in different species, or different things within different cells of the same organism). This has been a major hurdle following the Human Genome Project: a mass of information without meaning.[2]

Information and Entropy:

Since information is not inherently meaningful, there is a lot of meaningless information floating around out there. It's no accident that the units for measuring entropy are identical to the units for measuring information: systems with maximum entropy store the greatest possible amount of information. Some methods of compressing computer files are called "entropy encoding"; they take long files with low information per byte and transform them into shorter files with higher information per byte. So the statement that "entropy increases in the universe" is actually the same as "information increases in the universe". The increase in entropy in the universe is related to the fact that the present contains information about what happened in the past, and there is an ever-increasing amount of past.[2]
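A quick way to see the information-per-symbol idea (this is the standard Shannon entropy formula; the example strings are my own):

```python
import math
from collections import Counter

def entropy_per_char(s):
    """Shannon entropy of a string, in bits per character."""
    n = len(s)
    counts = Counter(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

low = "aaaaaaaaab"   # repetitive: ~0.47 bits/char, compresses well
high = "abcdefghij"  # all distinct: log2(10) ≈ 3.32 bits/char
```

An entropy coder exploits exactly this gap: the repetitive string can be rewritten in far fewer bits per character, while the already-dense string cannot be shortened much.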

Sources:
[2] Some conversations with Keegan, possibly, or I just made it up.


Humanity's Quest for Immortality

One of humanity's oldest impulses has been to conquer death. From Egyptian mummification to the Christian heaven, the idea that somehow, through material or spiritual works, we can transcend the mortal coil and live forever is highly seductive. These days, science and technology promise longer lives and even immortality, ranging from the modest and sensible precautions of eating well and exercising, to the radical biomedical revolutions professed by Aubrey de Grey and Ray Kurzweil.

Between antiquity and post-modernity lies a broad stretch of the quest for immortality that has received little attention. John Gray has come out with an interesting new book examining that quest in the Victorian and Soviet eras, revealing a fascinating secret history of spiritualists, eugenicists, Soviet utopians, and the science fiction writer H. G. Wells.

Darwinism is impossible to reconcile with the notion that humans have any special exemption from mortality. In Darwin's scheme of things species are not fixed or everlasting; there is no impassable barrier between human minds and those of other animals. How then could only humans go on to a life beyond the grave? If all life were extinguished on Earth, possibly as a result of climate change caused by humans, would they look down from the after-world, alone, on the wasteland they had left beneath? Surely, in terms of the prospect of immortality, all sentient beings stand or fall together. Then again, how could anyone imagine all the legions of the dead – not only the human generations that have come and gone but the countless animal species that are now extinct – living on in the ether, forever?

Science could not give these seekers what they were looking for. Yet at the same time that sections of the English elite were looking for a scientific version of immortality, a similar quest was under way in Russia among the "God-builders" – a section of the Bolshevik intelligentsia that believed science could someday, perhaps quite soon, be used to defeat death. The God-builders included Maxim Gorky, Anatoly Lunacharsky, a former Theosophist who was appointed commissar of enlightenment in the new Soviet regime, and the trade minister Leonid Krasin, an engineer and disciple of the Russian mystic Nikolai Fedorov, who believed that the dead could be technologically resurrected. Krasin was a key figure in the decisions that were made about how Lenin's remains would be preserved.

Weakened in Britain, belief in gradual progress had ceased to exist in Russia. An entire civilisation had collapsed, and the incremental improvement cherished by liberals was simply not possible. The idea of progress was not abandoned, however. Instead it was radicalised, as Russia's new rulers were confirmed in their conviction that humanity advances through a succession of catastrophes. Not only society but human nature had to be destroyed, and only then rebuilt. Humans did not go on to a new life on the other side. There was no other side. When humans died they returned to dust, just like other animals. But once the power of science was fully harnessed, the God-builders believed, death could be overcome by force. Eventually all of humankind could look forward to scientifically guaranteed immortality, but the process of technological resurrection would begin with the most valuable of human beings – Lenin.

Read the full excerpt at The Guardian.

The protagonists of John Gray's book are the immediate ancestors of the current technological quest for immortality. Though the scientific barriers we face today are different, the philosophical quandaries and contradictions of physical immortality are similar. What will life mean without death to end it? How can society evolve when the powerful never relinquish their power? Is living forever truly the highest goal we can devote ourselves to? There is a difference between individual immortality and the continued survival of humanity as a whole. For our species and our culture, death is tragic, but ultimately necessary, and even unavoidable.

Should we seek immortality, or are our scientific resources better used elsewhere? John Gray's history of this strange quest may help us decide.


Rep. Giffords and the Lone Wolf Terrorist

Today has been a dark and tragic day in Arizona. At least six people are dead, a Congresswoman is in critical condition after being shot through the head, and everybody has been terrified by a rare act of violence. Our hearts and thoughts go out to Representative Giffords and her family.

But meanwhile, the blogosphere is full of theories and speculation. Some people jumped immediately to a Tea Party assassin; others said it was a random maniac, or an illegal immigrant, or even a left-wing false flag operation. In the intervening hours, more details have emerged about the shooter, and given my recent post about lone wolf terrorism, I'll give it my shot.

The shooter has been identified as Jared Loughner, a 22-year-old white male from Tucson, AZ. We know only a little about his background: he attended community college, and he was refused military service. Like other lone wolf terrorists, he has left a manifesto, in the form of a YouTube channel.

Loughner's videos are brief text slideshows accompanied by ambient guitar music (I am reminded of God Is an Astronaut). The picture that emerges is of a disturbed young man, a waking dreamer in a society of illiterates. Loughner is obsessed with mind control, and with the power of grammar, currency, and religion over the masses. Here's what he has to say about terrorism and the Constitution.


If I define terrorist then a terrorist is a person who employs terror or terrorism, especially as a political weapon.

I define terrorist.

Thus, a terrorist is a person who employs terror or terrorism, especially as a political weapon.

If you call me a terrorist then the argument to call me terrorist is Ad hominem.

You call me a terrorist

Thus, the argument to call me a terrorist is Ad hominem.

You don't have to accept the federalist laws.

Nonetheless, read the United States of America's Constitution to apprehend all of the current treasonous laws.

You're literate, listener?

In conclusion, reading the second United States Constitution, I can't trust the current government because of the ratifications: The government is implying mind control and brainwash on the people by controlling grammar.

The only other video on the page is a seven-minute clip in which a man in a black robe ritualistically burns the American flag to the metal anthem “Let the Bodies Hit the Floor.” His list of favorite books is “Animal Farm, Brave New World, The Wizard Of OZ, Aesop Fables, The Odyssey, Alice Adventures Into Wonderland, Fahrenheit 451, Peter Pan, To Kill A Mockingbird, We The Living, Phantom Toll Booth, One Flew Over The Cuckoo's Nest, Pulp, Through The Looking Glass, The Communist Manifesto, Siddhartha, The Old Man And The Sea, Gulliver's Travels, Mein Kampf, The Republic, and Meno.”

I'm not a psychiatrist, but Loughner's reasoning is stereotypically schizophrenic and paranoid. He denies external reality, rails against the structure of society, and is obsessed with some Tea Party favorites, like the gold standard and the 10th Amendment. His rejection by the military matches the lone wolf profile. He is quite clearly disturbed and dangerous, but is he a terrorist, or merely a confused young man?

The distinguishing point between the terrorist and the typical American school or workplace shooter is the political nature of the terrorist's action. Shooters (and we've seen a lot of them) target the entity they believe has wronged them the most, usually a familiar location like a school or workplace. These are desperate people who feel the need to make one final statement.

Loughner railed against Pima Community College, describing it as a scam and a torture facility full of mind control victims. He alludes to some incident in which he was prevented from speaking. Why then did he target Representative Giffords, and not his school?

We can't fully know, but this act stemmed from a place of deep nihilism. Loughner was in many ways a bomb waiting to explode, though I doubt the inflammatory climate around him helped. Giffords was most likely a target of opportunity, the most important one he could reach.

Giffords was listed on Sarah Palin's “TakeBackThe20.com”, a website which marked her district with a target symbol and demanded that voters remove the representatives so marked (it has since been scrubbed from the internet). Sarah Palin said about healthcare, “The crossfire is intense, so penetrate through enemy territory by bombing through the press, and use your strong weapons -- your Big Guns -- to drive to the hole. Shoot with accuracy; aim high and remember it takes blood, sweat and tears to win.” We of course remember Sharron Angle's “Second Amendment remedies.”

At this point, a clear line cannot be drawn between any specific statement by Palin, Beck, et al. and the shooting, but one thing is obvious: Jared Loughner chose to make his ultimate statement on a political target, not a personal one. The heated rhetoric and weapons-related metaphors increase the risk that some number of disturbed people in America will decide that their lives are ruled by political oppression, not personal failure. The right wing bears responsibility for this incident, and I hope it prompts their pundits to step back from the edge of madness they currently walk.

EDIT: When I wrote this, much of the information about Loughner was speculative. I've corrected his name and military status. In an early draft, I also laid more of the blame on the right wing. I was wrong, acted in haste, and went against my own argument. I've left the original conclusion up as a reminder to myself. Apologies to those offended.


When I wake up in the morning and ask "how does physics support this waking feeling?" I wave my hands and say "something about bits and information, maybe; keep researching." This is part of an elaborate system of self-deception designed to keep me from dropping out of graduate school.


Collect Intelligence

As part of my day job, I'm setting up a futurism blog called The Prevail Project (it's in early alpha, so don't look). I have a backlog of interesting articles to work through, and the cool ones will be cross-posted to We Alone.

No matter how smart you are, you can't fix the problems of the world by yourself. Teamwork is the order of the day, and a new study from MIT suggests that the skills of effective teams might be more universal than previously thought. Researchers put small teams of two to five people through a battery of different assignments while monitoring how they interacted.

“We did not know if groups would show a general cognitive ability across tasks,” said Thomas W. Malone, the Patrick J. McGovern Professor of Management at the MIT Sloan School of Management, one of the authors of the paper. “But we found that there is a general effectiveness, a group collective intelligence, which predicts a group’s performance in a lot of situations.”

That effectiveness, the researchers believe, stems from how well the group works together. Groups whose members had higher levels of “social sensitivity” — the willingness of the group to let all its members take turns and apply their skills to a given challenge — were more collectively intelligent. “Social sensitivity has to do with how well group members perceive each other’s emotions,” said Malone. “In groups where one person dominated, the group was less intelligent than in groups where the conversational turns were more evenly distributed.”

The average intelligence of the individuals in a group had no effect on its performance. And interestingly, teams with more women were more effective than male-dominated teams.

What's the takeaway? Now that scientists have the tools to study how groups work in detail, we can build smarter, more effective teams. The skills of social sensitivity, listening and taking turns, could be taught. And depending on how much technology a group is willing to accept, the methods of the MIT researchers could tell groups when they're being "collectively stupid" or cutting members out of the loop.

Small teams are particularly important for Prevail because teams can bring many different skills and perspectives to the table, and can adapt to the specifics of a new problem. While it takes years to master a body of knowledge, teams that can effectively integrate new members who are already experts can respond rapidly to emerging threats. Knowing how small groups work, and how they can be made to work better, is one way to Prevail.



A friend has written up a nice post surveying communication prosthetics and information theory. Check it out.

I have been told that devices for the severely disabled, like eye-trackers, are too unreliable and break down too often to be practical, especially when caregivers aren't computer literate. Problems like this seem to be simply a matter of engineering. I imagine that this population could benefit greatly from usable open-source implementations of a variety of augmentative communication devices, since many individuals cannot afford expensive custom-built computer systems.


Lone Wolf Terrorism

Every once in a while, I stumble across a paper so weird that it just has to be read. Several weeks ago, Bruce Schneier's security blog posted a Master's thesis from the Naval Postgraduate School. I promised a review of it, so without further ado, here's my take on Major Norman Springer's thesis, “Patterns of Radicalization: Identifying the Markers and Warning Signs of the Lone Wolf Terrorists in our Midst.”

Springer conducts a biographical analysis of the three most famous American lone wolf terrorists, Tim McVeigh, Ted Kaczynski, and Eric Rudolph, to establish similar patterns of psychology and development along a common timeline. By identifying these patterns, law enforcement officials might be able to stop lone wolf terrorists before they strike.

The paper is fairly long, but a fast read. I'm not going to recount the histories of these terrorists (read Springer's paper, or Wikipedia), but to quickly summarize: McVeigh and Rudolph were anti-government right-wing extremists, while Kaczynski was a radical luddite striking out against the entire industrial system. Despite their divergent views and backgrounds, there is a clear common path, and Springer draws together diverse documentary sources to illuminate the commonalities.

All three lone wolves were profoundly disconnected from society, starting at a young age with parents who were either abusive, distant, or flaky, a situation which emphasized self-sufficiency. The subjects had immense trouble forming relationships with other people, especially women. They were pointed on the path to radicalization by some malefactor: racist coworkers in the case of McVeigh, a racist foster parent for Rudolph, and an extended and very traumatic psychological experiment/interrogation conducted on Kaczynski when he was an undergraduate at Harvard. Each subject made one final attempt to join himself to a large hierarchy, such as the military or academia, but was ultimately rebuked. After this failure, the subjects turned inwards, their self-sufficiency becoming isolationism and paranoia. They blamed their personal failings on some grand, evil power in the universe, a power which must be fought by good men. At this point, their course turned towards violence, and once they had killed, they were committed to a path of terror. Each man demanded public recognition for his actions, publishing before and after capture as part of an “organization” in the vain hope that the people would see the righteousness of his cause and rise up. They have so far been universally disappointed.

I enjoyed Springer's work. His conclusions are well grounded, and the historical overview is a useful synthesis. But what struck me was that all of his subjects were pre-network individuals. Their most profound experience was their failure to join a group. These days, though, the nature of community has shifted. The internet is full of random watering holes and bulletin boards, places for the like-minded to exchange ideas and feel less alone. Would virtual communities provide enough of a sense of belonging to defuse these most radical terrorists, or are they echo chambers that would amplify their bad ideas? Of course, people who speak out in the open on the internet can be monitored, and their energies funneled into harmless (if massively expensive) traps. But the new lone wolf will not fit the pattern that Springer has laid out. He will likely be closer to Nidal Malik Hasan (the Fort Hood shooter). What does the internet add to Springer's formulation of the lone wolf?