20101231

What natural language is most similar to mathematical notation ?
What natural language is most similar to programming languages ?
I hear legal English is pretty close.


20101230

Mind:melt

*I should have set a lower framerate for smoother playback


20101228

Year of the Leak

2010 is the year of the leak. From the Afghanistan and Iraq war diaries to Cablegate to the upcoming revelations about Bank of America, the website Wikileaks has driven the media debate, challenged American foreign policy, and proposed a new way to control governments and corporations. The hyperbole surrounding Wikileaks has been tremendous. Senators have demanded the head of Julian Assange on the Senate floor, while Assange has compared himself to Martin Luther King and Gandhi. The allegations of rape, treason, and espionage make for a dramatic tale, but there are important questions to be asked. Is the absolute transparency of Wikileaks truly good for democratic governance? What balance of openness and privacy should we strive for in society? And what comes after Wikileaks?


Public transparency is one of the cornerstones of democracy. Citizens must know what the government does in their name, and must know that public figures are capable and honest, so that they can be replaced if they are not. The traditional organ of transparency is the press. In the words of Thomas Jefferson, “Our liberty cannot be guarded but by the freedom of the press, nor that be limited without danger of losing it.” The press exists to monitor politicians and inform the public, but it is also dependent on public officials for leads and quotes, and beholden to commercial advertising. The entertaining, familiar press is a fixture, but is less credible than ever before. There is a widespread sense among Americans that the news is not telling them the truth, that stories are slanted and incomplete.


Into this gap steps Wikileaks, with a radically different view of how information and the public should work. Wikileaks demands that all information be publicly accessible, that governments and corporations be completely transparent, and that those who do not abide by these rules be punished. But despite similar techniques, do not confuse Wikileaks with the press; Wikileaks is a political organization with revolutionary aims fundamentally antithetical to the structure of contemporary society.


Julian Assange is a deep, if unconventional, political theorist, and at the heart of Wikileaks is his idea of the authoritarian conspiracy. Assange believes that the world is ruled by conspiracies, not in the “9-11 was an inside job, aliens exist, the Queen of England is a reptoid” way, but in a much more formal, mathematical sense: the idea that there are networks of power and influence which exert a great deal of control over events, to the detriment of people in general. These authoritarian conspiracies are the real structure of government, senior civil servants, politicians, industrialists and tycoons, and they collaborate to run the world system.


Assange wants to kill this conspiracy, and Wikileaks is his tool. Conspiracies and networks are hard to eliminate: there is no central commander to decapitate, and new members rise up from the ranks. The continued battle against Al Qaeda shows how difficult it is to destroy a conspiracy. Instead of waging war on the powerful, Assange has targeted their infrastructure, the network of trust that allows the global authoritarian conspiracy to coordinate its actions. It is not any specific piece of information held by Wikileaks that matters; rather, its existence and its ability to expose and embarrass authoritarian conspiracies force them to spend time and energy on internal security, reduce the ability of conspirators to trust one another, and ultimately drive the conspiracy into paralysis. An authoritarian conspiracy that cannot communicate cannot think, cannot act, and will ultimately be destroyed.
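
To make that logic concrete, here is a toy sketch (my own illustration, not Assange's actual model; the names, trust weights, and threshold are invented): treat the conspiracy as a network of trust links, and watch it fall apart into fragments that cannot coordinate once leaks make high-trust communication too expensive to maintain.

```python
# Toy illustration (mine, not Assange's model): a conspiracy as a network of
# trust links. Leaks raise the cost of trusting anyone; links below the
# required trust level become unusable, and the network fragments.

# Undirected trust links between five hypothetical conspirators, weight = trust.
links = {("A", "B"): 0.9, ("B", "C"): 0.8, ("C", "D"): 0.7,
         ("D", "E"): 0.6, ("E", "A"): 0.5, ("B", "D"): 0.4}

def components(links, threshold):
    """Connected components using only links whose trust exceeds the threshold."""
    nodes = {n for pair in links for n in pair}
    parent = {n: n for n in nodes}
    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n
    for (a, b), trust in links.items():
        if trust > threshold:
            parent[find(a)] = find(b)
    groups = {}
    for n in nodes:
        groups.setdefault(find(n), set()).add(n)
    return list(groups.values())

print(components(links, threshold=0.3))   # one connected conspiracy
print(components(links, threshold=0.75))  # high cost of trust: isolated fragments
```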


Is the American government an authoritarian conspiracy, as Assange describes it? From certain viewpoints, yes. The American government often acts in secret, and has lied, deceived, and killed in the name of small, wealthy interests before. It represents only 307 million of the nearly 7 billion people on this planet. But on the other hand, domestic funding is publicly accountable, and the U.S. often acts as the 'global policeman' to stop rogue states and weapons of mass destruction.


As Jaron Lanier lays out in an excellent essay, the internet is, at its base, binary: on or off, totally open or completely closed. This feature is built into the core of the hardware that runs the internet, and it is why Wikileaks has been so successful. A poorly secured government network (SIPRNet) was penetrated by one of the three million users who had access to it. Once the alleged leaker, Private Manning, had passed the cables to Wikileaks, they were everywhere, and impossible to put back in the bag.


Data on the internet exists in only one of these two states, total openness or total secrecy, but real life is full of shades of gray. We tell things to our family we would not tell to our friends, which we would not tell to colleagues, which we would not tell a stranger on the bus, and so on. Ultimately, without the privacy of our thoughts, the self as we know it would not exist. What Lanier fears is that Wikileaks, in seeking absolute transparency, will instead create the opposite, a completely militarized state where information is tightly controlled. For fear of losing the crown jewels of military secrets, the government might lock everything up.


For Bruce Sterling, Wikileaks and its founder are the physical and political embodiment of the Internet, of a hacker culture that delights in the coolness of information and access without much worry for the real consequences. There is a hacker belief in the power of Truth, and the equation of Truth with a lot of information. But for ordinary people, not computer nerds or hackers, information is blinding light, bleach that destroys privacy and personality. The opposite side of transparency in democracy is discretion, the ability of public servants to speak only so much of the truth, because the whole truth will lead to chaos, not freedom.


That is the essence of what Wikileaks has done to American diplomacy in the wake of Cablegate. I doubt that there is much surprise in professional diplomatic circles over the contents of the cables. The corruption and lasciviousness of world leaders make for fun gossip for the chattering classes, but the most likely result is that foreigners will be leery of sharing their candid assessments with American diplomats, and diplomats will be worried about sending those assessments on. Mutual griping, gossiping, and speculating is required to build informal communities of trust (or authoritarian conspiracies), and it cannot be sustained when diplomats must examine every word for its public significance, not just the joint statements made at the end of prolonged negotiations. The sphere for public thought and action has grown smaller.


I keep faith in the hacker credo that information is power, that information wants to be free, and that information can set us free. But Wikileaks is only the first step; information must be used by people to impact the world. Wikileaks itself has become more canny about this over its four-year history, strategically providing the most provocative documents to the mainstream media first, but a brief flurry of indignation over the state of the world is not a solution. Even with their corruption exposed, most of the people featured in the Wikileaks cables are effectively beyond the reach of the law. Wikileaks espouses one way of dealing with them, based on shame, paranoia, and ever-escalating cyber-attacks. But shame is only relevant in the eye of an increasingly jaded and distracted public. Paranoia affects the institutions we rely on as much as it affects malefactors, and cyber-attacks are a dead-end arms race that will only make computers and networks less useful.


Rather than antagonize the world, as Wikileaks and those in government charged with responding to it have done, we should use this chance to have a conversation, not a trial. The US government should publicly make the case for why the actions exposed by Wikileaks have been for the good of the nation, and the world. And if its arguments cannot withstand public scrutiny, then it is time to find new policies, and new goals. What we need is not transparency, but candor.


20101220

Soylent Gas is People

This comes from a drunken conversation last night about renewable energy.


There are 300 million people in America; 10% of these people have been designated as America's Greatest Renewable Resource (TM), and their patriotic duty is to be rendered into biodiesel to fuel our tanks, SUVs, and snow machines.

For parsimony, assume the average American weighs 100 kg. This gives us 3*10^9 kg of raw material. Most of this is water, bone, and other things that do not react, so assuming 20% total conversion efficiency, we wind up with 6*10^8 kg of biodiesel.

This sounds like a lot, but in fact it's only about 7*10^8 liters of fuel, or roughly 4.3*10^6 barrels of biodiesel. Less than one gulf oil spill. A mere drop in the ocean.
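
A quick back-of-the-envelope check of the arithmetic (a minimal sketch; the 0.88 kg/L biodiesel density, the ~159 L oil barrel, and the ~4.9 million barrel estimate for the 2010 Gulf spill are assumed conversion factors, not numbers from the conversation):

```python
# Back-of-the-envelope check of the "people as biodiesel" numbers.
# Assumed conversion factors (not from the original conversation):
#   biodiesel density ~0.88 kg/L, 1 oil barrel ~158.99 L,
#   Deepwater Horizon spill ~4.9e6 barrels.

population = 300e6          # people in America
fraction_rendered = 0.10    # "America's Greatest Renewable Resource"
mass_per_person_kg = 100    # assumed average mass
efficiency = 0.20           # fraction of body mass converted to fuel

raw_mass_kg = population * fraction_rendered * mass_per_person_kg   # 3e9 kg
fuel_kg = raw_mass_kg * efficiency                                  # 6e8 kg

density_kg_per_L = 0.88
L_per_barrel = 158.99
fuel_L = fuel_kg / density_kg_per_L          # ~6.8e8 L
fuel_barrels = fuel_L / L_per_barrel         # ~4.3e6 barrels

gulf_spill_barrels = 4.9e6
print(f"{fuel_L:.2e} L, {fuel_barrels:.2e} barrels")
print(f"fraction of one Gulf spill: {fuel_barrels / gulf_spill_barrels:.2f}")
```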

Well, back to the drawing board.


20101212

OpenMaterials.org

OpenMaterials.org is a pretty nice website. Their blog is a nicely curated mix of science, art, hacking, and sustainability. I want to re-link a lot of the stuff they've covered, but you can head over and check out their website in full. Here are the top five cool things that I didn't know existed before today :

  1. Papercrete
  2. Appropedia
  3. Open Source Washing Machine
  4. Re:FarmTheCity
  5. Embedding circuits in home-made paper


20101208

Bayesian hallucination

Have you ever felt your phone buzz (when it didn't), or seen an email notification in the corner of the screen (when there was none)? Don't worry—you're not losing your mind.
 
This happens because the brain performs value-weighted predictive coding of unreliable sensory inputs. It can be explained in terms of optimizing costs and benefits using unreliable information from peripheral vision:
  • let $u$ be the utility ( benefit ) of responding to a notification,
  • let $c$ be the cost of verifying whether a notification is real or imagined
  • let $\Pr(\mathrm{present})$ be the probability that a notification is really there
Optimally, you should check a notification if the expected benefit of responding to the notification outweighs the cost : check notification if and only if $\mathbb E(u)>c$
[0] $\mathbb E(u) = u \cdot \Pr(\mathrm{present})$

[1] check notification if and only if : $u \cdot \Pr(\mathrm{present}) > c$
How does one know $\Pr(\mathrm{present})$ given some unreliable observation $\theta$ in peripheral vision, that is $\Pr(\mathrm{present}|\theta)$ ? This can be computed using Bayes' theorem : [2]
[2] $\Pr(\mathrm{present}|\theta)=\Pr(\theta|\mathrm{present})\cdot\Pr(\mathrm{present})/\Pr(\theta)$
Here, $\Pr(\theta|\mathrm{present})$ is the probability of observing $\theta$ when the notification is really there, $\Pr(\theta)$ is the probability of observing $\theta$ overall, and $\Pr(\mathrm{present})$ is the background probability of the notification being present. Replacing the prior $\Pr(\mathrm{present})$ in equation [1] with the posterior $\Pr(\mathrm{present}|\theta)$ from [2] gives :
[3] check if and only if : $u \cdot \Pr(\theta|\mathrm{present}) \cdot \Pr(\mathrm{present}) / \Pr(\theta) > c$
Peripheral observations $\theta$ are noisy: the distribution of $\theta$ when a stimulus is present overlaps with its distribution when the stimulus is absent. If the expected benefit $u$ of responding to a notification is high, the evidence threshold for checking drops, so even weak signals (or pure noise) can trigger a check. The sensory system automatically converts unreliable perception into a (possibly inaccurate) high-level report for the parts of the brain that deal with behavior and attention.
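
Here is a minimal numerical sketch of this decision rule (the Gaussian noise model, the prior, and the specific utility and cost values are illustrative assumptions, not values from the argument above):

```python
import math

# Toy sketch of the check-or-ignore decision under noisy peripheral vision.
# Assumptions (illustrative): the peripheral signal theta is Gaussian with
# mean 1 when a notification is present and mean 0 when it is absent.

def gauss_pdf(x, mu, sigma=1.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def should_check(theta, u, c=1.0, p_present=0.1):
    """Check iff u * Pr(present | theta) > c  (equations [1]-[3])."""
    like_present = gauss_pdf(theta, 1.0)   # Pr(theta | present): signal near 1
    like_absent = gauss_pdf(theta, 0.0)    # Pr(theta | absent):  signal near 0
    p_theta = like_present * p_present + like_absent * (1 - p_present)
    p_post = like_present * p_present / p_theta   # Bayes' theorem, eq. [2]
    return u * p_post > c

# A moderate-value notification is only checked on strong evidence; a very
# valuable one gets "checked" even when the buzz was pure noise (theta = 0).
for u in (10.0, 30.0):
    print(u, [should_check(theta, u) for theta in (0.0, 0.5, 1.0)])
```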


Why Scientists Aren't Republicans

Dan Sarewitz writes one of those articles about something that we all know, and that should prove terrifying.


A Pew Research Center Poll from July 2009 showed that only around 6 percent of U.S. scientists are Republicans; 55 percent are Democrats, 32 percent are independent, and the rest "don't know" their affiliation...
Could it be that disagreements over climate change are essentially political—and that science is just carried along for the ride? For 20 years, evidence about global warming has been directly and explicitly linked to a set of policy responses demanding international governance regimes, large-scale social engineering, and the redistribution of wealth. These are the sort of things that most Democrats welcome, and most Republicans hate. No wonder the Republicans are suspicious of the science.
Think about it: The results of climate science, delivered by scientists who are overwhelmingly Democratic, are used over a period of decades to advance a political agenda that happens to align precisely with the ideological preferences of Democrats. Coincidence—or causation?

Of course, Dan's a political thinker, an iconoclast, a bridge-builder. He goes on to advocate that scientists endeavor to show conservatives that they are not mere political shills. Scientists hold an immensely trusted position in American society (above 90%), and it'd be a shame to throw that away.

I prefer to take the opposite tack. What is it about Republican politics that is anti-science? Could it be that conservative positions on the environment, public health, economics, national security, and the origins of the universe are so obviously counter to reality that no one can consider themselves both a Republican and an astute observer of a real, physical universe? The level of cognitive dissonance required to maintain both literacy with the frontiers of science and adherence to conservative ideology is completely unsustainable.

Even more deeply, perhaps there's something implicitly antagonistic about science and conservatism. Science relies on a belief that truth is contingent on What Is, and What Can Be Observed. It does not matter who postulated a theory, as long as it matches reality. And if a theory fails, then it, and all contingent facts should be discarded. Conservatism, the worship of the past and a desire for stability, is antithetical to this project of continually tearing down and rebuilding reality.

Perhaps a better question is: Given that the world today is scientifically and technically constructed, that scientific truths are the 'best' truths, that technological artifacts define our lives, why should we listen to a group which is so fundamentally anti-science?

Not everything is relative. Sometimes there are right answers.


A note on the binding problem

Edit : never mind, this post is redundant with this much more comprehensive review.


There was, a few years ago, some debate on "the binding problem". This problem stems from the fact that distinct areas of the brain are specialized for extracting certain visual features. For instance, the brain regions that represent the location and motion of objects are far away from the brain regions that identify objects. Nevertheless, a running cat is not perceived, disjointly, as a cat and a moving thing. Somehow, even though the parts of the brain responsible for semantically identifying objects know nothing of location, and the parts of the brain responsible for localizing objects know nothing of semantic identity, we experience an integrated reality where specific things have specific locations.

To simplify, say you are presented with a spoon on the right and a fork on the left, and asked to retrieve the fork. So, somewhere in the brain is the notion "there are two things here, one on the left and one on the right" and somewhere else in the brain is the notion "there is a spoon and a fork here, but I'm not sure where". How the brain combines these two representations has been the subject of much speculation.

Some have proposed that populations of neurons responding to the same object become synchronized, such that neurons firing for "thing on left" and neurons firing for "fork somewhere" tend to fire at the same time, and this somehow unifies the two areas. I am skeptical of this "binding by synchrony" hypothesis.

I am skeptical because, when I am not paying attention, I am very likely to pick up the wrong utensil, and I suspect that attention is critical for binding. This argument hinges upon some assumptions of how the visual system works and what attention is.

The visual system is hierarchical. At first, the brain extracts small pieces of lines and fragments of color. These features are well localized, and "low level". Then, the brain begins to extract more complex features. These may be corners, curves, textures, pieces of form. This information is not as well localized. The combining of features into more complex features is repeated a few times, until you get to "high level" representations complex enough to identify whole objects, like "forks" and "spoons". As features get more complex, they lose spatial precision, until the neurons that can identify objects really have no idea where that object is.

In the visual system, there is feedback from higher level to lower level representations. Activity in high level representations can bias activity in lower level representations. You may be most familiar with this phenomenon when you are day-dreaming. We are able to control, to some extent, the activity in most visual areas, and we think that this control constitutes imagination. We have more control over "high level" visual areas. This control weakens toward lower level visual areas. For instance, primary visual cortex appears to be inactive in dreaming and visualization.

When we are awake, this top-down control is used for attention. Attending to an object will make said object "pop out" ( become more salient ). This enhanced salience may propagate from higher to lower level visual areas. For instance, if I focus on "fork!", the neurons that know there is a fork somewhere will enhance all fork-like mid level features, which will enhance fork-like low level features, and so on.

The key point here is that, by focusing on the identity of an object, I can increase the salience of low and mid-level visual features representing that object. Although the semantic part of my brain may have no idea where the "fork" is, it can make the low-level fork features pop out. And these features are well localized. Thus, the part of my brain that knows "there is something on the left, and there is something on the right" will find that the item on the left suddenly seems more salient. This seems sufficient to let the brain know where it needs to reach to pick up the fork.

This effect works both ways. If I ask "what is the object on the left", the neurons that know where the thing on the left is will make the features of the left object more salient, which will enhance the representation of the "fork" features in the part of the brain that can identify what objects are. Note that this effect doesn't need to be large, or make the "fork" dominate over all other objects in the scene. You simply need a brief increase in the salience of "fork" over background objects to know that the thing on the left is a "fork".

All of this happens rapidly and automatically. Binding is achieved by attending to high-level properties of objects, and therefore gating which objects get processed in other, distant, high-level areas. Attention ensures that at a given time only one unified object is most salient.
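
A toy sketch of the gating idea (entirely my own illustration, with made-up feature names and gains, not a model from the literature): the "what" system has no location information, but boosting fork-like low-level features lets a separate location map read out where the fork is.

```python
# Toy illustration of binding via top-down attention (illustrative only).
# Two objects: a fork on the left, a spoon on the right.
# Low-level feature maps are localized; the "what" system knows identities
# but not locations; the "where" system knows locations but not identities.

low_level = {
    "left":  {"fork_features": 1.0, "spoon_features": 0.0},
    "right": {"fork_features": 0.0, "spoon_features": 1.0},
}

def attend_to(identity, gain=2.0):
    """Top-down feedback: boost low-level features matching the attended identity."""
    feature = f"{identity}_features"
    salience = {
        loc: feats[feature] * gain + sum(feats.values())
        for loc, feats in low_level.items()
    }
    # The "where" system simply reads out the most salient location.
    return max(salience, key=salience.get)

print(attend_to("fork"))   # -> 'left'
print(attend_to("spoon"))  # -> 'right'
```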


20101206

Report from Transforming Humanity

This past weekend (Dec 3-4), I attended the Transforming Humanity: Fantasy? Dream? Nightmare? Conference hosted by the Center for Inquiry, the Penn Center for Bioethics, and the Penn Center for Neuroscience and Society. James Hughes and George Dvorsky of the Institute for Ethics and Emerging Technologies gave their blow-by-blow records of the conference, but I'd like to step back and provide an overview of the field, and its position today.

The ability to use pharmaceuticals, cybernetics, and genetic engineering to alter human beings poses many complicated ethical, philosophical, and political issues about the potential deployment of these technologies. The attendees at the conference ranged from hardcore transhumanists, to left-wing bio-conservatives, and took a variety of approaches, from theology, to philosophy, to bioethics and medical regulation.

On the philosophical side, several speakers traced the philosophic heritage of transhumanism, and the demand to either find a place for man in the natural world or to create a unique standpoint, through the works of Thoreau, Sartre, and Cassirer. Patrick Hopkins of Millsaps College gave an interesting lecture on a taxonomy of post-human bodies: Barbies, Bacons, Nietzsches, and Platos. Post-humans will have to find internal meaning in their lives in many ways, and while I appreciated the scholarship, there should have been more about the new intimacy of technology to the post-human, and its effects on daily life, beyond the obligatory references to Haraway's Cyborg Manifesto.

On the practical side, the Penn contingent (Jonathan Moreno, Martha Farah, and Joseph Powers) talked about coming developments in cybernetic devices, brain implants, and pharmaceuticals. As it stands, there exists no regulatory framework for enhancements. The FDA will only certify the safety of therapeutics, drugs that treat diseases, which means that a prospective enhancement will either have to find a disease (medicalization, in the jargon) or exist in a legal limbo. Katherine Drabiak-Syed gave a great lecture about the legal and professional risks run by doctors prescribing Modafinil off-label. Despite American Academy of Neurology guidelines approving neuroenhancement, prescribing doctors are putting their patients at risk, and are violating the Controlled Substances Act.

Allen Buchanan opened the conference by suggesting that there was nothing special about unintended genetic modification, or evolution, while Max Mehlman of Case Western closed the conference by asking if humanity can survive evolutionary engineering. Dr. Mehlman posed four laws: Do nothing to harm children, create an international treaty banning a genetic arms race, do not exterminate the human race, and do not stifle future progress for understanding the universe. Good principles, but as always, the devil is in the details. International law has been at best only partially successful at controlling weapons of mass destruction or global warming.

To close on two points: The practical matter of regulating human enhancement remains highly unsettled, and leading scholars in the field are only beginning to figure out how we can judge the effectiveness and risk of particular enhancements on a short-term basis, let alone control long-term societal changes. The potential creators, users, and regulators of enhancement are spread across medicine, electrical engineering, law, education, and nearly every other sector of activity, and they are not communicating well. Basic questions such as “What does it mean to enhance?” and “Who will be responsible?” are unlikely to be closed any time soon.

On a philosophical level, the question of whether “To be human is to choose our own paths” or “To be human is to find and accept our natural limits” is unlikely to have a right answer. But Peter Cross was correct when he pointed out that even enhanced, humans will still need to find a source of meaning in their lives. If there is a human nature, it is to be unsettled, to always seek new questions and answers. The one enhancement we should absolutely avoid is the one that will make us content.


20101201

Belief-based certainty vs. evidence-based certainty

Over at [this] forum I noticed the following comment (#10), which plays into some recent thoughts I've been having :
"Evidence-based certainty uses rationality to gradually prove or disprove theories based on empirical evidence. Belief-based certainty works in the other direction, the desired certainty is already known and rationality is abused to build on carefully selected evidence to “prove” that belief.

Belief-based certainty will always have a higher value socially and politically in the short term because it satisfies the immediate need for certainty and it is purchased by those who have the assets to afford it and have the most to lose.

Evidence-based inquiry is a process that only produces a gradually increasing probability of certainty in the long term. Facts will lose the news cycle but quietly win the cultural war."
I think in a very broad sense the narrative which "RFLatta, Iowa City" is drawing, and which Paul Krugman often uses to distinguish himself from those dastardly freshwater economists, is true, but should be taken with a grain of salt because it is a false dichotomy.

"Evidence-based inquiry" is surely what we ultimately want to point to when we talk about science and mathematics, but the process of how the sausage is made is obviously different in some important respects. An investigator knows he must collect evidence, but what are the right questions to ask? What are the right experiments to perform? These decisions cannot be made on the basis of hard evidence, since we haven't collected any hard evidence yet -- one must take existing hard evidence from other's experiments and then try to extrapolate to make a plausible prediction.

Indeed in computational learning theory too, we see the importance of this approach of "finding a plausible fit" to some of the data based on some unjustified assumptions, and then testing the hypothesis against other data.
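
As a concrete, purely illustrative sketch of that workflow (the synthetic data, the choice of a linear hypothesis class, and the 50/50 split are all assumptions of mine): commit to a hypothesis on a hunch, fit it to part of the data, then judge it on data it never saw.

```python
import random

# Illustrative sketch: commit to a hypothesis class on a hunch (here, a line),
# fit it to training data, then test the fitted hypothesis on held-out data.

random.seed(0)
data = [(x, 2.0 * x + 1.0 + random.gauss(0, 0.5)) for x in range(40)]
random.shuffle(data)
train, test = data[:20], data[20:]

# Least-squares fit of y = a*x + b on the training half.
n = len(train)
sx = sum(x for x, _ in train); sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train); sxy = sum(x * y for x, y in train)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# The real check: how does the hypothesis do on data it never saw?
test_error = sum((y - (a * x + b)) ** 2 for x, y in test) / len(test)
print(f"fit: y = {a:.2f}x + {b:.2f}, held-out mean squared error = {test_error:.3f}")
```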

The point is, we can't find a good fit until we understand the data, but we have to start somewhere, so where do we start? The answer is, generally, we start with our beliefs, and go with our gut.

In mathematics of course, having a good intuition is critically important. Famously for Godel, intuition was all important -- even though the Continuum Hypothesis is known to be independent of ZFC, Godel believed we can have set theoretic intuition about some of its consequences such that we should reject it as false. How Godel could possibly have cultivated such an intuition continues to be regarded as something of a mystery, depending on how much you read into it. Richard Lipton writes a nice blog post about all of this: http://rjlipton.wordpress.com/2010/10/01/mathematical-intuition-what-is-it/

Which brings me to a critical juncture -- what is the distinction between intuition and prejudice? My contention is that there is none; they are semantically equivalent and differ only in positive / negative connotation. I should mention another quote I am fond of, which I may have disseminated previously:
"A great many people think they are thinking when they are merely rearranging their prejudices." -William James
How do we know when we are really meaningfully investigating an open question as opposed to just juggling around our prejudices? It really seems that at least some of the time, this may be the hardest aspect of science. I can certainly remember advisors on projects I worked on in the past who were pleased when I led myself to backtrack on some entrenched assumption I had made.

How do we confront issues like this when the question is something like P vs. NP, where now essentially 90% of the field believes P != NP, and takes the attitude "we know they aren't equal, now we just have to prove it"? In at least one talk I've seen, Peter Sarnak stuck his neck out and opined that this attitude is unscientific.

It seems to me that most of the time, we don't spend too much time arguing about intuitions, because it is largely unproductive. Use whatever mystical value system you want to guide your research, but if it doesn't produce results, you'd better toss it out the window, and it must yield to proofs. It's fine to believe "P != NP because everything is an expander graph", and get it tattooed on yourself in German if you want, but if it doesn't go anywhere... don't get too attached to your burdens.

So what's the moral? At this point, it seems to me that mathematical intuition is a total myth, part of this silly hero-worship ritual that we all seem to indulge in to some extent. Yet on the other hand, I've never known professors to disabuse undergrads or grad students of this idea. Indeed we even see really famous people like Godel, Richard Lipton, and Enrico Bombieri "indulging".

So perhaps a reasonable hypothesis is that we progress as follows -- when we are young, we believe anything; when we are grad students, we become dramatically more skeptical; and then somehow, with experience, we come around and believe again.

I just spent like 20 minutes trying to find this webcomic I believe I saw... it was either xkcd or smbc, one of those things where you have a graph showing how, either with age or with the amount of thought put into it, your belief in God begins very high, then plummets ("how could god possibly exist"), and then continues to oscillate between 50% and 0 for the rest of your life ("oh, that's how...").

Personally I don't find that to be the case wrt God, but I now think it's plausible with respect to mathematical intuition.

And there we go again, extrapolating some kind of crazy oscillating curve based on two data points, some hearsay, and a web comic... fml.