Primer Part 2

As promised, part two of my technology in education post. Thanks to John and Cameron for their comments. In this article I’m going to flesh out the three core life skills a person must develop to become a good citizen capable of living a fulfilling life: self-discipline, cooperation, and reflective evaluation, and explain how educational technology can be used to develop them.

Self-discipline is simply the ability to do something which is not much fun in the short term, yet which you know to be good in the long term. It is a key component of long-term planning and law-abiding behavior. Walter Mischel’s famous marshmallow study demonstrated a strong correlation between the ability to delay gratification at the age of four and future academic success and good life outcomes. David Brooks’s recent column The Relationship School had an interesting vignette about discipline at the New American Academy: “Even though students move from one open area to the next, they line up single file, walk through an imaginary doorway, and greet the teacher before entering her domain.” The goal here is clearly to get students to treat normative barriers in the same way that they’d treat architectural barriers. While different people have different levels of self-control and discipline, it is clear that these are skills that can be learned. Indeed, character-building is the central premise of the KIPP schools.

Cooperation is the ability to work well with others. Human beings are intrinsically social, most tasks these days take place in a group, and even a ‘lone genius’ acts as part of a tradition of human effort. An MIT study on collective intelligence found that the skill of a group didn’t depend on the abilities of its individual members, but rather on “the willingness of the group to let all its members take turns and apply their skills to a given challenge,” a quality the researchers deemed social sensitivity (in which, as an aside, women have a substantial edge). In preschool and kindergarten, teachers focus on social tasks like sharing toys, but education soon becomes very structured and individualized. Aside from team sports and group projects, we leave learning how to cooperate to ‘free time’ like lunch and recess, and leave teachers out of it. And then we wonder why kids are horrible to one another.

Finally, reflective evaluation is the ability to look at yourself, look at the world around you, and figure out what it is that you want to do with your life. It is the ability to appreciate aesthetics, to create, to explore, to seek clarity, to behave morally, to be driven by something other than shallow urges towards pleasure and away from pain. This is the fuzziest notion of the three that I have advanced, but it is connected with Maslow’s idea of self-actualization, or Foucault’s goal of “becoming yourself”. Basically, we are happiest when we are doing something that we love and something that we are good at. The trick is figuring out what it is that you actually want.

So how can technology enable these goals? The key features of the educational technology that I envision are a touchscreen tablet that can easily be linked to keyboards, mice, and peripherals like microscopes and sensors for special assignments; a front-mounted camera that can identify the user; a microphone with voice recognition; GPS for locational data; and a system that monitors user activity. Expect it to be loaded with apps that are directly educational, like classic books, history lessons, and math and science worksheets, along with creative apps for art and music and programming, and the usual social networking and games.

Today, the majority of teachers’ energy is spent making lesson plans, delivering lectures, and grading assignments. These core educational tasks are about transmitting information and checking that it has been received. As currently performed, they involve a massive duplication of effort, waste trained experts’ time on menial tasks, and make it impossible to actually develop best practices.

Rather than following the lecture, homework, and test model, a textbook app provides multimedia (text + video + extra clarifications) tutorials on the subject, followed by a series of practice problems. Rather than having to demonstrate lessons and maintain order in the classroom, the teacher has time to work with students individually, along with a wealth of data about exactly what in the lesson their students don’t understand. The class doesn’t have to run at the pace of the slowest student; each student can work through the lesson at their own pace and in their own style: alone, in a group, outdoors, or late at night.

Completing a lesson gives you ‘credit’ that you can use to unlock the next lesson, games, and in some cases a useful tool. For example, reading a book and writing about it lets you check out more ebooks from the school’s library, or audition for the play. Mastering addition, subtraction, multiplication, and division gives you a four-function calculator. Trig and algebra give you a graphing calculator, and completing calculus gives you something like Wolfram Alpha. Completing a unit in science might allow you to sign up for a field trip or experiment. Participating in a gallery show might upgrade your available art apps from Paint to Photoshop to Maya. Rewarding people for doing good work by giving them more of the same taps both intrinsic and extrinsic motivation.
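The credit-and-unlock mechanic described above can be sketched in a few lines. This is a minimal illustration under my own assumptions, not a real app’s API; the class name, the lesson names, and the unlock table are all hypothetical.

```python
# Hypothetical sketch of a lesson-completion unlock system.
# The UNLOCKS table and all names are illustrative assumptions.

class StudentProgress:
    """Tracks completed lessons and the tools they unlock."""

    # Completing every lesson in a set unlocks the associated tool.
    UNLOCKS = {
        frozenset({"addition", "subtraction", "multiplication", "division"}):
            "four-function calculator",
        frozenset({"algebra", "trigonometry"}): "graphing calculator",
        frozenset({"calculus"}): "computer algebra system",
    }

    def __init__(self):
        self.completed = set()

    def complete_lesson(self, lesson: str) -> list[str]:
        """Record a finished lesson and return any newly unlocked tools."""
        self.completed.add(lesson)
        unlocked = []
        for required, tool in self.UNLOCKS.items():
            # A tool unlocks when its full prerequisite set is completed,
            # and this lesson was one of the prerequisites (so we only
            # report it once, at the moment of completion).
            if required <= self.completed and lesson in required:
                unlocked.append(tool)
        return unlocked
```

So a student who has finished addition, subtraction, and multiplication gets the calculator the moment they complete division; the reward arrives exactly when the work is done, which is what makes this kind of token economy sticky.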

Finally, I do believe that testing is important, but as it stands, we spend too much time testing students, and students spend too much time worrying about tests. Why not let students review the material as much as they want, and then, when they’re ready, go to a “Quiet Room” where talking is discouraged, put their tablets in test mode, and complete a worksheet just like the ones they’ve practiced with by themselves? In test mode, communication applications are locked down and the face-recognition camera makes sure that the right person is using the tablet. If the student didn’t bother to prepare or scored poorly compared to their previous work, their teacher is automatically alerted.
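The test-mode flow above is simple enough to sketch. Everything here is an assumption for illustration: the function names, the stubbed identity check and teacher alert, and the “well below usual work” threshold are mine, not a real tablet API.

```python
# Hypothetical sketch of the "Quiet Room" test-mode flow.
# verify_face and alert_teacher are stubs standing in for real
# camera and notification systems; the 70%-of-average threshold
# for flagging a weak score is an arbitrary assumption.

def run_test_session(student, worksheet, verify_face, alert_teacher):
    """Verify the user, grade the worksheet, and flag anomalies.

    student:      dict with an "average_score" from prior work
    worksheet:    list of {"answer": ..., "correct": ...} items
    verify_face:  callable(student) -> bool, camera identity check
    alert_teacher: callable(student, message), notification hook
    """
    if not verify_face(student):
        alert_teacher(student, "identity check failed")
        return None

    # In a real system, messaging and browser apps would be
    # disabled here before the worksheet is shown.
    score = sum(1 for q in worksheet
                if q["answer"] == q["correct"]) / len(worksheet)

    # Flag students who score well below their running average,
    # so the teacher hears about it automatically.
    if score < 0.7 * student["average_score"]:
        alert_teacher(student, f"scored {score:.0%}, well below usual work")
    return score
```

The point of the sketch is the last few lines: because the system already has each student’s history, “poor compared to their previous work” can be a computed trigger rather than a judgment call the teacher has to remember to make.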

Entire curricula can be built out of a series of lessons of increasing complexity. Students and teachers can work together to develop an individualized lesson plan that meets state-mandated minimum requirements for a certain number of hours of math and English and what have you, along with a timeline of how the student should be progressing. A student who is behind on their lessons can be instantly located and given extra support. The goal is a total assessment environment that is both accurate and effortless, rather than the artificial hoops of the current system.

This model is based on the most addictive video games, which combine token economies with collecting activities and increasing difficulty. Make the lessons just hard enough to induce ‘flow’, and students will spend hours in the zone as they burrow through math lessons, scientific theories, and novels and histories. Because “textbooks” are all electronic and monitor how they are used, publishers can continually modify lessons at no cost to clarify issues that lots of students find confusing. And teachers have more time to work with students one-on-one or in small groups.

The conventional wisdom is that electronics, and particularly the internet, are making us perennially distracted and unable to focus. Won’t putting more computers in school make this worse, and lead to the opposite of self-discipline? Well, the internet is only distracting because there’s no reward for resisting the distraction. When your email pings, you check it because checking it is rewarding and there’s no reason not to. Imagine that a student is using an educational app and gets a text from a friend (we might as well surrender to the fact that kids are going to text and use social media in school, because that genie is not going back in the bottle). Rather than look at the message, the student chooses to finish the lesson and respond in 15 minutes. Their tablet gives them a little smiley face and a ‘good job on good decisions’ message. Only small changes are necessary to make technology useful for concentration rather than distraction.
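That deferral-and-reward nudge could be as small as this. A toy sketch under my own assumptions: the class, the queueing behavior, and the praise message are illustrative, not any real app’s behavior.

```python
# Toy sketch of the notification-deferral reward described above.
# All names and the reward logic are hypothetical illustrations.

class FocusSession:
    """Queues incoming messages during a lesson and rewards deferral."""

    def __init__(self):
        self.queued = []

    def on_message(self, msg):
        # Hold the message silently instead of interrupting the lesson.
        self.queued.append(msg)

    def finish_lesson(self):
        # Deliver the held messages only now, and praise the student
        # for finishing before checking them.
        praise = "Good job on good decisions!" if self.queued else ""
        messages, self.queued = self.queued, []
        return praise, messages
```

The design choice is the interesting part: the device doesn’t block the message, it just changes the default from “interrupt now” to “deliver at a natural break,” and attaches a small reward to the better default.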

Through little nudges like this, discipline is taught as an incremental process. Early apps are very structured, and as a student advances they are required to learn how to structure their own education. For example, a ‘learn-to-read’ book might highlight each word in turn and ask the student to follow along out loud, using voice recognition to demonstrate proficiency. A book for 4th graders might have multiple choice questions about the content at the end of every chapter. A book for middle-school students would ask for a series of short answers about specific elements of the book. And a book for high school students might not have any work attached until the end, where the student has to synthesize an essay about the major themes. At each level, the student is provided the tools to make a smooth transition to the next level.

And finally, we can encourage discipline by providing a clear reward for success: freedom! A straight-A student maintaining an ambitious course load can be expected to have the tools to succeed on their own. A C student struggling with the concepts needs more support. So give the C student a lot of structure and a lot of supervision, and let them graduate to making their own decisions. With constant monitoring of activity, teachers can see instantly whether a student is slacking off or excelling.

Teaching cooperation is hard, but my first premise is that since learning core skills will take less time and can be done in groups, students will inherently learn to work together. Students can learn to manage common resources like playground equipment, teaching them about sharing. Finally, virtual spaces offer a great way to learn the norms of collaboration. Minecraft has become a platform for monumental shared world-building, including a recreation of The Lord of the Rings as a spiritual pilgrimage. Imagine using similar techniques to integrate students into a scientific and political study of local water-quality issues, or an international exchange of cultures. And rather than being seen only by the teacher, students’ work could be presented to the entire class, with participation in constructive criticism part of a student’s job. Several of my colleagues have observed that their students (ASU undergrads) write better for class blogs than they do on traditional papers because they know their audience and want to leave a good impression.

More formally, the MIT study linked above used electronic badges designed by the Media Lab to record the pattern of interactions in groups. Tablets with the right software could do the same, recording who talks and when. But this may be overkill; better to encourage students to cooperate of their own free will, with people nearby and over the net, because it’s fun and useful.

On the final key skill: reflective evaluation is something that can’t be taught by a machine; a person has to find it on their own. But the process can be made easier by providing many different types of opportunities, and by continually probing with that most important question, “Why?” I hope that under this system, students will spend less time on basic skills and more time out in the real world working on interesting problems or pursuing their passions in the creative arts. I hope that education can become more relaxed, rather than rushing from kindergarten to college without pausing to take stock. And I hope that it becomes a process that never ends: that even after passing their High School Certification Exam, people continue to access educational materials.

This model might be optimistic, but I don’t think it’s utopian. The technology is almost ready, requiring just a little more integration. The major problems are now political: teachers’ unions, national testing standards, and a belief in the intrinsic value of a high school diploma. The money for reform can be found: in 2008, according to Department of Education statistics, public schools spent an average of $10,441 per student for results that are at best average and are truly failing disadvantaged communities. We must do better. And the first step towards doing better is replacing 19th-century information technologies with 21st-century ones, and freeing our skilled education professionals to do what they do best: inspire and mentor!

I’d also like to preemptively address two major concerns: surveillance and cyber-bullying.

Surveillance and privacy is a hard nut to crack. You should really see Bruce Sterling’s 2006 story “I Have Seen the Best Minds of My Generation Destroyed by Google” for a vision of a dystopia where “All our social relations have been reified with a clunky intensity. They're digitized! And the networking hardware and software that pervasively surround us are built and owned by evil, old, rich corporate people!” (because Chairman Bruce does it better than I can). But on the other hand, read your Foucault: schools are disciplinary environments that run on surveillance, and that can’t be stripped out of the educational system. If you want to educate the masses, you have to monitor and evaluate them.

But we can set up the surveillance humanely. Make only the highest-level data, like grades and time to completion, available to end users. A full data review should be reserved for major correctional interventions. Anonymized usage statistics can be sent to publishers to improve software without major issues, as is already standard tech practice.

We’re operating in a world that ranges from fully public, like what Google knows about you, to semi-private (Facebook profiles), to absolutely secret (an encrypted file on a disk in a bank vault). Make an environment that supports varying degrees of privacy, and teach students how to use it from the beginning: from maintaining a public profile, to a place for their friends, to a private site for their best friends, to their personal secrets. And then respect their privacy. Teenagers are finding themselves, and a desire for privacy is not a crime in and of itself. It should not be treated like one.

While the Cory Doctorow story Knights of the Rainbow Table is a bit of an exaggeration, in this age of Wikileaks and Anonymous, technological architectures cannot protect us. Only strong norms can.

As for cyber-bullying, it’s a real issue, but I see it more as an extension of normal bullying than as a wholly new phenomenon. Right now, cyberbullying can be hard to prosecute since schools (rightfully) don’t have jurisdiction over what happens outside their walls. But if platforms are substantially used for education, schools can intervene. While I am not an expert on bullying, my sense is that bullying is aggravated by the fact that we force children to spend their time packed together in small rooms with no exit. In a more tech-enabled school, a bullied child could simply leave and find an environment that is more protective and conducive to learning. And on the internet, there’s a subculture for everybody, as Juggalos and Bronies so evidently prove.


Foresight: A Brief Literature Review

Several of my projects these days center on foresight, the process of looking at the future and doing something other than burying your head in the sand or screaming and throwing your feces. Foresight is not about predicting the future, because that’s impossible. Rather, it’s about cultivating a holistic and adaptive worldview so that when the inevitably unexpected happens, you don’t freeze on the tracks and get run down by the Future. Foresight is definitely more of an art than a science, but a lot of smart people have been writing on it for several decades. These are some of the articles I’ve found particularly useful.

Ron Bradfield, George Wright, George Burt, George Cairns, Kees Van Der Heijden. The origins and evolution of scenario techniques in long range business planning.
The authors trace the history of various foresight techniques, from RAND, to Shell Oil, to the La Prospective school. They lay out the major features of the probabilistic modified trends and intuitive logics techniques, the two major schools of practice, and bring some clarity to the confused world of scenario planning. A key starting point for any foresight scholar.

James Ogilvy. Future studies and the human sciences
Ogilvy develops a broad philosophical justification for futurism as crafting normative scenarios. Slicing through almost every human discipline (anthropology, history, literature, philosophy, psychology, sociology), he notes a semiotic turn. Rather than following the physical sciences in discovering the laws of human activity, post-modernists have turned away from trying to discover truths towards interpreting meaning, and how meaning matters. The role of the futurist is to make the future meaningful in a positive and useful way, to deconstruct both status quo futures and dystopian visions, and give voice to the desires of the public for a better tomorrow.

Cynthia Selin. Professional dreamers: The past in the future of scenario planning.
The founding document of scenario planning is The Gentle Art of Re-Perceiving by Pierre Wack. Wack was a Shell Oil executive, futurist, and mystic, who developed the technique of scenarios to allow his fellow executives to see beyond their narrow disciplinary boundaries and approach problems from a fresh perspective. Wack’s aim was organizational learning, and so when developing scenarios we should keep in mind the organizations that we are trying to change. Scenario planning today has deviated from Wack, as most scenario practitioners are independent contractors, lacking Wack’s deep understanding of the field as well as his reflexive orientation. But even if we cannot follow directly in Wack’s footsteps at all times, he represents an aspirational goal.

Cynthia Selin. Trust and the illusive force of scenarios.
What makes a scenario ‘good’? It’s definitely more of an art than a science, and in this article Selin argues that an effective scenario must be both provocative and trustworthy. Yet the traditional metrics of trust (past performance, adherence to best practices, an overwhelming weight of evidence) do not apply to scenarios. Scenario making is a rhetorical art, where the practitioner must foster trust among all participants, and use metaphor and narrative to bridge the gap between the familiar present and the uncertain future.

Ronald Bradfield. Cognitive barriers in the scenario development process.
This article attempts to bridge cognitive psychology and scenario planning by exploring the cognitive aspects of how scenarios work and fail to work. Human beings are lousy thinkers, and we tend to fall back on a variety of heuristics rather than engaging in proper analysis. The key findings are that a group engaged in scenario planning draws upon ready-made scripts from the media (news, fiction, science), and that the group reaches a transition point where it closes on a concept and can no longer be influenced. While I’m not qualified to vouch for the quality of the cognitive psychology, foresight needs more analytic articles like this one.

EDIT: And thanks to Cynthia Selin for pointing me to these articles in the first place. There's a reason you ask the experts.


A Primer on the Future of Education

In the Diamond Age, a novel full of astounding technologies, the clear star is the Young Lady’s Illustrated Primer. Sure, nanobots and cyborg implants and geological engineering and kitchen 3D printers and--well, you get the point--are cool, but the Primer--the talking, moving, intelligent and empathetic book that educates Nell--is at the heart of the novel. More than a MacGuffin, the Primer links all the characters together, and plays a pivotal role in the explosive conclusion.

Primer technology is so awesome that it is one of the moonshots for the SOOPER SEKRIT PROJEKT, which is why I’d like to take a closer look at it. I know two things for sure: 1) computing technology is getting better all the time, and 2) our model for education is broken.

Let’s start with the technology side. Cellphones have achieved basically universal penetration. A decently powerful computer is within the budget of the global middle class, and there are many people trying to make it more accessible all the time, from the One Laptop Per Child computer, to the $50 Indian Education Ministry-sponsored Aakash Ubislate, to a $300 Acer or Asus netbook. These aren’t particularly great computers, but they’ll let you access the internet, watch videos, create and edit documents, and learn to code. Not that there aren’t very real problems with getting electricity and bandwidth to the poorest billion, but there are also lots of very dedicated people and organizations pushing on the technology. It won’t be as slick and indestructible as the Primer, but the hardware is definitely getting there.

The second side is education. According to Bruce Mau, higher education is accessible to only 1% of the world’s population. Schools are underfunded and overcrowded pretty much everywhere outside of a small group of wealthy post-industrial countries, and then you get ridiculous soul-crushing South Korean study mills. If you believe that education is a pre-requisite to living a good life (and I do, by and large), then we have problems. Big problems.

I’m not alone in this assessment, and there is a tidal wave of innovation directed towards applying the power and scale of information technology to education. Khan Academy, MITx, and ShowMe are some of the bigger names, but probably the most innovative players are Sebastian Thrun and Peter Norvig, the Stanford professors behind the 160,000 student open access graduate level course, CS221: Introduction to Artificial Intelligence. You should really just read the whole article about it at Wired Science, but I’m going to pull out the crunchy philosophical bits.

After seeing Khan at TED, Thrun dusted off a PowerPoint presentation he’d put together in 2007. Back then he had begun envisioning a YouTube for education, a for-profit startup that would allow students to discover and take courses from top professors. In a few slides, he’d spelled out the nine essential components of a university education: admissions, lectures, peer interaction, professor interaction, problem-solving, assignments, exams, deadlines, and certification. While Thrun admired MIT’s OpenCourseWare—the university’s decade-old initiative to publish online all of its lectures, syllabi, and homework from 2,100 courses—he thought it relied too heavily on videos of actual classroom lectures. That was tapping just one-ninth of the equation, with a bit of course material thrown in as a bonus.

Thrun knew firsthand what it was like to crave superior instruction. When he was a master’s-degree student at the University of Bonn in Germany in the late 1980s, he found his AI professors to be clueless. He spent a lot of time filling in the gaps at the library, but he longed for a more direct connection to experts. Thrun created his PowerPoint presentation because he understood that university education was a system in need of disruption. But it wasn’t until he heard Khan’s talk that he appreciated he could do something about it. He spoke with Peter Norvig, Google’s director of research and his CS221 coprofessor, and they agreed to open up their next class to the entire world. Yes, it was an educational experiment, but Thrun realized that it could also be the first step in turning that old PowerPoint into an actual business…

He’s envisioning his own digital university, with a less conventional curriculum, one based on solving problems, not simply lectures on abstract topics. It would offer a viable alternative for students of the global one-world classroom—particularly those who lack the resources to move to the US and attend college.

Thrun decides that KnowLabs will build something called Udacity. The name, a mashup of audacity and university, is intended to convey the boldness of both Thrun’s and his students’ ambitions. His goal is for Udacity to offer free eight-week online courses. For the next six months or more, the curriculum will focus on computer science. Eventually it will expand into other quantitative disciplines including engineering, physics, and chemistry. The idea is to create a menu of high-quality courses that can be rerun and improved with minimal involvement from the original instructor. KnowLabs will work only with top professors who are willing to put in the effort to create dynamic, interactive videos. Just as Hollywood cinematography revolutionized the way we tell stories, Thrun sees a new grammar of instruction and learning starting to emerge as he and his team create the videos and other class materials. Behind every Udacity class will be a production team, not unlike a film crew. The professor will become an actor-producer. Which makes Thrun the studio head.

He’s thinking big now. He imagines that in 10 years, job applicants will tout their Udacity degrees. In 50 years, he says, there will be only 10 institutions in the world delivering higher education and Udacity has a shot at being one of them. Thrun just has to plot the right course.

Yikes! If education is about transferring knowledge from teachers to students, i.e. information transmission via some sort of web app, then we know from observation how that process plays out. With Google, Amazon, Youtube, Facebook, and so on, one firm establishes technological superiority, gains a larger market share, and then just eats everybody else. Education is actually a fairly conservative business (the oldest continually operating institutions on the planet go: the Catholic Church (~2000 years), medieval universities (~1000 years), and then a bunch of Johnny-come-lately corporations and governments), and it’s based on a lot of prestige and momentum, but web apps are very cheap to develop and operate compared to a traditional university, and they are much more scalable. The only real barrier is credentialing, the process of giving somebody a piece of paper that says they’re qualified to do something, and as soon as the education app developers figure out the politics of their credentialing system, the whole edifice of higher education is just going to blow away.

Universities are increasingly expensive, lousy at teaching useful skills, and produce a worthless credential. And all this is doubly true for American primary and secondary education. It won’t take much innovation to make something that is a lot cheaper, and has comparable or even better educational outcomes.

The stage is set for something like the Primer to actually come about. It won’t be slick and seamless like in the novel, but the continuously improving combination of hardware and software that we see in real gadgets can make an educational platform that is cheap, accessible, and able to take a student from kindergarten to a bachelor’s degree. The technological problems are essentially solved.

But wait, education isn’t just about information transmission. Schools do more than teach facts and theories, they are factories of socialization. They produce citizens. What kind of people will tablet educated students grow up to be? There’s an echo of this in the Diamond Age, where the most impressive feat of the Primer is not that Nell knows kung fu, or computer programming, or nanoscale engineering, or even how to get along in NeoVictorian society, but that the Primer creates a Mouse Army of 100,000 Chinese orphans who are capable of acting as a perfectly coordinated network. We spend only a little time with the Mouse Army girls, but on reflection, they are a profoundly strange society.

Now, I’m not going to defend the kind of socialization that happens in the American school systems that I’m familiar with. I think it’s often dominated by the most pathological personalities, both students and teachers, and results in trauma rather than personal growth. Much of what made me who I am today happened far away from the classroom, and from the structured process of education (and I had, objectively, one of the best educational trajectories possible, from pre-school to Oakwood to Caltech, Vassar, and ASU). But socialization has to happen; in fact, students are going to be socialized in some way whether we want it or not.

The question that I therefore pose to you, my loyal readers, is what kind of citizens do we want our schools to produce? How can we best socialize students for the future? And how can new educational technologies and our legacy systems work together to maximize opportunity for all?

I have my own theories, which I’ll try and explain later in the week, but I want to hear from you guys first.


The Affective Component-A John Carter Movie Review

Last weekend I saw John Carter, an action-adventure blockbuster based on a series of early 20th-century books that are considered to be some of the foundational works of science-fiction. I enjoyed the movie well enough for what it was, an effects-driven spectacle without much substance, but it looks like it’s going to lose money. A lot of money. And that means Hollywood is hunting for somebody to blame. It’s the usual story of mismanagement, poor marketing, and an unwilling audience, but maybe we can learn something useful about science-fiction and story-telling.

A little background from the LA Times:

Instead, with a weak opening this past weekend, Wall Street analysts expect the company to take a $165-million loss on a movie that has joined "Heaven's Gate," "Ishtar" and "Howard the Duck" in the constellation of Hollywood's costliest flops.

What happened? The very things Disney thought would guarantee box-office success may have left "John Carter" star-crossed from the start. The acclaimed director had never made a live-action movie before. The executives guiding and helping market his movie were new on the job and had limited experience running movie divisions. And the source material, written beginning a century ago by Tarzan creator Edgar Rice Burroughs, had already been so picked over by its admirers that critics and audiences found the film hackneyed and stale…

By the time "John Carter" started filming in January 2010, however, Cook had been replaced by Rich Ross, a television executive who had never overseen a film of this scope. Ross named as president of production Sean Bailey, a movie producer who lacked experience as a studio executive, then installed MT Carney, an outsider from the New York advertising world who'd never worked at a studio, as marketing chief. Then Carney left in early January and was replaced by veteran Ricky Strauss — just as the film's promotional efforts were to kick into high gear.

The geek contingent on the internet is blaming bad marketing (and the marketing was truly awful, despite a Superbowl ad and $100 million budget) and political intrigue at Disney. Some critics say that audiences don’t like science-fiction films. And the studios are trying to turn director Andrew Stanton into a whipping boy for the flop. But the simple fact is, John Carter wasn’t a particularly good movie. I can’t tell you who’s to blame for that, but I can try and explain why the movie flopped, and in one word, it’s the characters.

The best comparison for John Carter is Pirates of the Caribbean (credit is due to Marci for pointing that out to me): Both movies are Disney-produced action-adventure flicks based on slightly silly material (a set of 100 year old books, and a theme park ride). Pirates, however, was a massive smash, and went on to spawn a series of increasingly bad sequels.

What made Pirates of the Caribbean so good was the quality of the characterization. Not necessarily their depth or their subtlety, but the way that the personalities and desires of the characters drove the plot: Will Turner wants to become a hero, Jack Sparrow wants revenge on his traitorous first mate, Elizabeth Swann is torn between being a responsible English lady and a life of adventure, and the villainous Captain Barbossa wants to break the curse and kidnaps Elizabeth Swann to do so. Jack and Will team up to get her back, and the story basically writes itself.

In John Carter, the eponymous hero at first wants to get back to Earth and his cave of gold, then he wants to save Dejah Thoris from being murdered at her wedding. Dejah doesn’t want to get married to the brutal warlord Sab Than (but does so to save her home city of Helium), who is being controlled by the truly evil Therns as part of a plan to destroy Barsoom and feed off the destruction.

You see the problem? Nobody in John Carter has a clear motive, or a point where they have to make a meaningful decision, or even an opportunity to come into conflict. Everybody is just set up at the beginning, and they bash together, and stuff happens, and the movie ends. There’s no reason to get involved with the characters, care about what happens, or give the movie more than a second of thought once you leave the theater.

Now there are more differences between the movies. You might argue that the actors in Pirates were just better, and this charisma carried over to their characters, and to be fair, Depp’s Captain Jack Sparrow was amazing. But are you really going to say that Orlando Bloom and Keira Knightley are great actors, head and shoulders better than John Carter’s Taylor Kitsch and Lynn Collins? They’re all pretty and young and otherwise unremarkable in my opinion. John Carter has some of the best visual design I’ve seen in a long time: the airships, the barbaric Tharks, the landscape, and the cities of Zodanga and Helium are beautifully rendered. But graphics can’t save a movie without a clear heart.

I have a lot more complaints about John Carter: the over-abundance of sidekick characters who ate up screen time; the tragic underuse of Dominic West (McNulty from The Wire) and of James Purefoy and Ciarán Hinds (Mark Antony and Julius Caesar from Rome); the poor pacing and tactical sense in the action sequences; and the marketing campaign (Oh god, the marketing campaign. Has Hollywood forgotten how to put a beautiful and badass lady on a poster? Did somebody take their testicles away or something? And don’t get me started about the Super Bowl ad). But all that would divert from my main point.

People want to relate to other human beings, or to their fictional representations. Characters need motives that the audience can comprehend, motives that viewers can in some way link to their own lives and experiences. Now this is all melodrama, so you don’t need many layers of complexity, but we like a little conflict and indeterminacy. Is Jack Sparrow a dashing rogue or a ruthless pirate? Who will Elizabeth Swann fall in love with? Will Will Turner acknowledge his father the pirate? This mystery and suspense sustains the audience’s interest through the slack periods.

John Carter doesn’t have anything like that. The motives are all negative, about not wanting to do something, or wanting to smash and destroy. The characters have a single layer of personality and no real internal conflict, and since the good guys and bad guys are so obvious, there’s no point in thinking about it. Everything the characters do and see is laid out for you in expository dialog as it happens, which kills the mystery of exploring Mars (compare this to the first scenes on Pandora in Avatar). The only mystery is in the framing narrative with a young Edgar Rice Burroughs.

What bothers me is that there’s an actual core of a good movie in John Carter. Make Carter more excited about being on a planet where he is a superhero. Give Dejah Thoris a moment where she genuinely considers marrying the enemy warlord because she wants peace. Be more clear about how terrible Sab Than will be if he becomes the supreme ruler of Mars. Cut down the extraneous characters and make Tars Tarkas or Kantos Kan the sidekick. Throw in some Indiana Jones-style tomb raiding to explore the weird history of Barsoom (in the books, John Carter kills a god whose religion consists of eating the souls of pilgrims. Why that wasn’t Act II of the movie I will never know). And if you’re making a science-fiction blockbuster, don’t try to hide the fact (they removed ‘of Mars’ from the title, reportedly because it didn’t test well with women, leaving the utterly generic ‘John Carter’), glory in it! A movie with aliens and flying ships is science-fiction, and science fiction does pretty well at the box office.

Science-fiction is the literature of wonder. A movie like John Carter is supposed to be amazing, it is supposed to leave the audience dazzled, it is supposed to show us the unknown and how cool that is. But too much wonder leaves us dazed and confused, future-shocked into numbness. In the face of the unknown, we need a fixed point to hold on to, and that anchor is humanity: characters that are clear, likeable, and relatable, characters who act as a window into a new world. Screw up the characters, and you screw up the audience’s affective connection to the story. And if that affective component isn’t there, if they don’t care, they won’t go and see your movie.

It’s simple as that.


Science Fiction Prototyping: A Preliminary Assessment

This is day 2 of the EMERGE event write-up, and my reflections on the workshop, Science Fiction Prototyping with Brian David Johnson, Intel futurist and director of The Tomorrow Project.

I believe that science fiction can be an incredibly powerful tool for shaping public perceptions towards emerging technology. Governance involves assessing risks and policies and making decisions between options, but how can we assess an emerging technology when such basic information as costs, benefits, and consequences are unknown and perhaps fundamentally unknowable? One of the major findings of STS is that supposedly value-neutral methods like cost-benefit analysis and linear extrapolation of trends in fact contain large implicit biases towards certain kinds of ‘valid knowledge’ and ‘rational outcomes’, and moreover, these methods fail to deal with major uncertainties, whether they’re Black Swan events like the collapse of the Soviet Union, or more subtle systemic shifts, like the rise of cellphones and social media in politics.

But the real strength of science-fiction is its broad appeal. Very few people read the white papers produced by bodies like the National Academy of Sciences, the Congressional Office of Technology Assessment was shut down in 1995 by Newt Gingrich’s “Contract with America” Congress, and public engagement efforts like the EU’s Café Scientifique are considered blockbuster successes if they reach tens of thousands of people. Popular science-fiction, whether in film, game, or print form, reaches billions of people world-wide. My own work on nanotechnology, biotechnology, and the space race has shown the critical role that science-fiction stories have played in framing the policy debate.

People are narrative thinkers; they naturally organize their world into stories, and understand when a story makes sense, and when it does not. By combining realistic characters and social milieus with novel technology, science fiction can engage multiple ways of thinking, and draw out underlying values and sites of conflict and confusion. There are no barriers to participation, anybody with a pen and paper can write, anybody with an internet connection can publish. Science fiction is technology assessment for the rest of us.

But all of the above are just my idiosyncratic and scattered jottings towards some sort of coherent foresight methodology, which is why I was really excited to see how the professionals did it. I’d read Brian Johnson’s book previously, and my impression was that he was on to something, but he hadn’t bothered to write it down.

The first day, Brian delivered a lecture on science fiction prototyping and how to do it. The key points were:
A) The minimalist vision of the future is wrong, because it looks like a prison
B) People like clutter, houses are hairy, look at what makes people comfortable
C) The extremes are what makes a story interesting
As we broke up for the day, he instructed us to think about what kind of story we wanted to tell, and gave us the 5 Step Plan for science fiction prototyping.


This is the diagram in the book, and you’ll note that it’s incredibly skeletal and linear. The abbreviated plan for Scenario Development has 8 steps, and requires that you examine both your own biases and purposes, and pretty much every shaping external force in the world. Science-fiction prototyping asks that you dive right in.

The actual process of science-fiction prototyping only sort of matches the diagram above. This is what I experienced in the process of making my prototype.


I want to make some notes here on what worked and didn’t.

The envisioning process depends on the information you have access to: What you know about science and technology, your own life experiences and beliefs, and any materials provided by the organizers. At EMERGE, despite the disparate disciplinary backgrounds of the workshop participants, we were all academics interested in the future, and we had all had the same full day of presentations and lectures.

Pitching and dialog are definitely learned skills, and different people have very different levels of aptitude at them. Some people can’t express a story concisely, others dominate the discussion, and some are simply boring and unknowledgeable. We worked in groups of between three and five people, which allowed everybody to participate in the dialoging process. Unsurprisingly, Brian David Johnson was far better at these tasks than the rest of us. Just a few minutes with Brian could clarify the key issues at play in the prototype, and the best way to bring them to the forefront.

Development, the part where you write, draw, film, or otherwise produce the prototype itself, appears to be inherently time-consuming and isolating. Everybody (except for a group working on a comic, which had a clear division of labor), retreated to their laptops to write their own stories. Most people had full outlines, but writing fiction is hard; one manuscript page an hour is a very optimistic rate. Judging from my previous writing workshops, it can take up to a month for an amateur writer to get a 3000 word story into some kind of readable form. The single day we had allocated simply wasn’t enough.

Finally, prototypes are useless unless you bring them out into the real world somehow. In our report out, we pitched the prototypes to the rest of the group, who then asked questions, and tried to nail the prototype down to its essential core. By this point, it was late in the day, we were tired and hungry, and the quality of the discussion suffered. A second pitch attempt with a completed draft is important, but in our case, we could have used more structure and time for the reporting out.

The biggest impression that I got from the workshop was that there’s a lot to science fiction prototyping that isn’t in the book. The method relies on tacit knowledge about science, technology, people, institutions, narrative structures, the creative process, and proper presenting and critiquing skills. There’s nothing wrong with tacit knowledge; indeed, the world would collapse without it. The problem with relying on tacit knowledge for foresight is that your visions are going to be infected with unexamined biases, and may confirm what you already believe rather than challenge and transform your vision of the future. The only check against this bias is the skill of the other participants in the process.

Making the tacit knowledge that goes into science fiction prototyping explicit would make for stronger prototypes. This diagram has just some of the invisible entities that surround the prototyping framework.


Science fiction prototyping is definitely useful, but there are many questions which should be answered before I’d be willing to fully trust it as a foresight methodology.

Some questions are procedural: What is the best preparation before going into the prototyping process? How should information and questions be framed so that non-practitioners find it productive? How can you train people to pitch and critique ideas more effectively? Is there a way to develop the prototype that is faster than writing a whole story around it? How can the constructive process of dialog continue throughout the development cycle? How can an individual communicate a prototype to a group in an impactful way?

Other questions are related to the core concepts of science-fiction prototyping, and are harder to resolve: What is the proper way to develop the technology through the course of the story: is it a character, a prop, or something else? How does an author recognize their biases and blind spots? How can science-fiction prototyping be used to prompt reflexive deliberation on the future? What does the dialog involved in prototyping imply for the authorship of the work, and the origins of its ideas? Does one need to make science fiction prototypes to find them useful, or is consumption of the right kind of science-fiction adequate for foresight?

I don’t have good answers for these questions now, but I hope that over the course of the next few months, I can finish my own prototype and resolve some of these theoretical and procedural questions. And any thoughts my loyal readers have on this would be very welcome.


Ten Books for the Future

“The problem is that science-fiction writers have stopped writing new futures and just started rehashing the past.” "No, the problem is that scientists and engineers have stopped doing exciting things." As I understand it, Michael Crow and Neal Stephenson had an exchange like this at a Future Tense conference about a year ago. I might not have the wording down right, but I agree with the sentiment entirely. Our leaders are drifting aimlessly towards a future of debt so large that money loses all meaning, paranoid overreactions to boogeymen like ‘international terrorists’ and ‘internet pedophiles’, a decaying industrial infrastructure on which we are all reliant, and an increasingly autonomous culture of radical novelty, self-expression, and technological change. But hey, they’re politicians. What do you expect, some kind of vision thing?

The problem is that one vital place where we as a culture might look towards some sense of futurity, science-fiction, has become increasingly generic, old-fashioned, and basically nostalgic rather than forward thinking. Disney, which is a good indication of the cultural pulse of America, is a stellar example, as the original Space Age, techno-utopian Tomorrowland was revamped into a Jules Vernian steampunk nostalgia trip.

With all that in mind, I’d like to put together a bibliography for the people looking to use science-fiction to influence the future. I’m a science-fiction fan, a science policy scholar, and a history buff, and this is my idiosyncratic list of 10 books that everybody should read if they want to understand Science, The Future, and how we’re going to get there.

Paolo Bacigalupi - The Windup Girl

How can I even describe this book? The Windup Girl won the Hugo, the Nebula, the Locus, and the John W. Campbell Award. It’d be easier to list the best-SF-novel awards it didn’t get. Set in a Thailand teetering on the brink of collapse, Bacigalupi paints a picture of a world where the oil has run out, global trade has collapsed, science has stalled, and the horsemen of plague, famine, war, and climate threaten to smash what little remains. Global warming has permanently altered the climate. Agriculture remains barely one step ahead of rogue genetic plagues unleashed decades ago, and only the fading expertise of big Midwest biotech consortiums keeps the world fed. Yet giving into the Calorie Men means giving up national autonomy, something which proud Thailand will never accept. The novel follows a complex cast of characters: Thai environmental police officers, an agent for the biotech concerns looking to loot a hidden seed bank of its genetic riches, a Malaysian exile seeking to rebuild his fortune by any means necessary, and the titular Windup Girl, an abandoned genetically modified “New Person” forced into sex slavery. Even in a world on the brink of collapse, people still want what they’ve always wanted: money, power, ideological success, or love. But at the end of the day, the Future is going to be born, whether we like its shape or not, and new beasts will live in the ruins of our cities.

When I read The Windup Girl, I couldn’t stop shaking. I could smell the elephant shit, feel the desperation, know the inexorable trajectory of our technological crimes against nature. I’m afraid that The Windup Girl is going to be our future, and that’s why you have to read it.

Bruce Sterling - Distraction

I’m holding myself to one book per author, and picking the right Sterling is no easy task. But I choose Distraction because A) it’s about a political operative trying to fix a white elephant scientific installation (A giant airtight dome and bioengineering laboratory in East Texas) and wandering into something far deeper, and B) even after a decade, it still smells like The Future to me. American politics has become an absurd carnival, invisible networks of dissidents do strange and terrible things to corrupt financial institutions, pretty much everybody is broke, but if you have money you can live like a king. And if you don’t, life is Burning Man! And somehow, in the midst of all these brilliant fragments of futurity, Sterling manages to tell a story about the American Soul, about what we need from our leaders, and about how science is remaking the world.

Charles Stross - Singularity Sky

The New Republic is an interstellar empire that Bismarck would love: Obedient peasants, heroic soldiers, honorable aristocrats, and none of that nasty disruptive technology; nothing more complex than telegraphs and nuclear powered steam engines, and they’re willing to do anything to keep it that way. So when an interstellar fleet of post-humans arrives over one of the New Republic’s colonies and begins dropping cellphones and nanoreplicator cornucopias from the sky, it ranks as a major breach of national security. But in this universe, God (or at least a super-human AI that uses time-travel in its computation) is watching, and it doesn’t want the regressive militaristic morons of the New Republic to do anything too stupid. Which is why two interstellar spies, one working for the UN and the other working for the Eschaton, have to figure out what’s going on and stop it before the Big E decides to clean up the whole mess by plowing a comet made of anti-matter into the planet. Power, politics, panopticons, the terrorizing liberation of a true post-scarcity economy, and some of the most kickass and realistic space combat combine to make this my favorite book about The Singularity and what it might mean to you.

James C. Scott - Seeing Like a State

What does a state require to govern? What does the process of being governed entail? Before a state can rule, it must render its subjects visible and record them with maps and censuses. Scott explores an ideology he calls high modernism, which aims not just to record things, but to change them to make them more visible, more legible, and more controllable by a central authority. But from the sterile new cities of Brasilia and Chandigarh, to mono-cropped farms, to Soviet industry and Tanzanian rural development, the modernist ideology that tries to render everything down to single-function units inevitably distorts and damages the subtle and complex fabric of society. The more heavily anything is planned, the more it is sustained by the informal sector. Scott reminds us to reflect on our own work and ask: what are we making visible? What is being obscured? What necessary stories are not being told?

Neal Stephenson – Diamond Age

Some people think this book is about the social implications of nanotechnology. These people are wrong, or at least they’re missing what I think are the most interesting parts of the book, which are about how we create identity in a globalized world. The Neo-Victorian aesthetic, the rituals of the Pacific Northwest Software Khans, and even the Primer-educated Mouse Army are all different attempts to craft personal and group identities in an era when borders have melted and the means of production have become entirely disassociated from human hands. Once the making of things becomes effortless, all that’s left is the making of stories; what kind of stories do you want to tell?

Neil Sheehan - A Bright Shining Lie

Those who don’t study history are doomed to repeat it, but some parts of history are more fruitful than others. The Vietnam War was the high water mark of American power and faith in the wisdom of our politicians. It’s where the American Dream turned sour, and we still suffer from the cultural wounds. The Vietnam War is like a fractal of horror and unintended consequences. Every level echoes the lies, short-sightedness, and bad decision-making of every other level, from the grunts fighting at Khe Sanh to the generals and presidents running the war from Washington D.C. A Bright Shining Lie covers every level of that war, following the career of John Paul Vann from his role as a lowly military advisor at the disastrous battle of Ap Bac to his madcap triumph as the absolute military authority in I Corps during the 1972 Easter Offensive. The corruption of the war is mirrored by Vann’s personal fall, the national quagmire becoming one man’s quixotic quest to save a foreign nation. If you were to read just one book about America after 1950, this would be it. Vann makes Colonel Kurtz from Apocalypse Now look like an amateur at going Up River and Never Coming Back.

China Mieville - The Scar

China Mieville writes about monsters: ambitious, fearful, oppressed, misguided, occasionally generous or brave monsters that have beetle heads, immense wings, chlorophyll for blood, mechanical parts, or sorcerous talents. In other words, people just like us. His richly imagined stories put a Socialist and Anarchist spin on the fantasy tropes, and in The Scar, an exiled translator is kidnapped by the exotic city of Armada, built on the backs of ancient ships from 1000 nations and ruled by brutal pirates. The diversity of the races, the novelty and depth of the world building, and the way that ordinary concerns are filtered through the lens of pulp adventure simply have to be read to be appreciated. I can’t think of any other author who deals as well with ideas of social justice, imperialism, absolute power, or what a single person can do in the face of History.

Peter F Hamilton – Fallen Dragon

These days, Peter F Hamilton is known for writing incredibly long space operas. But before he got on the six-book series kick, he wrote this philosophical military-SF novel that follows a space marine from his privileged upbringing on the sole successful interstellar colony, to being a foot-soldier for “asset-realization raids” (aka, Interstellar Corporate Piracy backed up with powered armor and orbital lasers), to attempting to retrieve his own broken past in a desperate battle against his corporate masters and a native insurgency. Hamilton invites us to consider the economics of spaceflight while at the same time exulting in the joys of exploration. The planets of the novel inspire reflection: the tired homeworld of Earth, the blank slate of Amethi, the tropical freedom of Thallspring reproducing the failures that came before, and the post-human threat of Santo Chico. Hamilton doesn’t hammer this point home, but the novel also has many interesting reflections on how governments and corporations interact, and how people might modify themselves to wield power or achieve liberty over generations through cloning, brain transplants, cybernetic links, and even more exotic modifications.

Robert Charles Wilson - The Chronoliths

In early 21st century Thailand, a 200-foot stone monolith appears in the jungle, its arrival heralded by a destructive blast of freezing air. The monolith is a monument to the victory of a warlord named Kuin, celebrating a battle 16 years in the future. Soon, Kuin monoliths are landing in major cities, killing millions, shattering nations, and sending the world towards a global holocaust. But as the years march on, the identity of “Kuin” and the means by which he launches his weapons remain unknown. The main character is drawn into a battle waged by scientists, philosophers, and unclassifiable ordinary people to save the world from destruction at the hands of duped Kuin cultists, who seek any surety they can find, even in the destruction of their own lives, and from the mysterious conspiracy behind the attacks. An amazing journey into the relationship between the present and the future, the mutability of tomorrow, and the power of belief.

Bruno Latour - Science in Action

Sometimes, when you need theory, you just have to turn to a Frenchman. By and large, nobody in politics actually understands what science is, how it works, or the kinds of questions that it can answer. Latour uses a combination of lab ethnography and Actor-Network Theory to explain how facts gain their facticity, the characteristic of being accepted as true by the broader community. Inscriptions, networks of people, things, and ideas, and conflicts between the durable and the transient all serve to distinguish the uncertainty of “science-in-the-making” from the absolute truth of “ready-made-science.” Science in Action is a dense book, but if you read it closely, it will explode your conception of scientific knowledge and replace it with a much more powerful and flexible framework. If you want science-fiction to be more than gadget fetishism, you’ll need an epistemological account like Science in Action.


EMERGE Impressions Day 3

The curse of grad school is that there’s always something else to do. I was finally able to grab a moment from the endless treadmill of readings to write up the rest of the EMERGE conference. Day 3 was a combination of keynote addresses and report-outs from the working groups. By and large, the keynotes were more interesting, so I’m going to focus on the keynotes and my responses. As it turns out, creating interesting design fictions in 24 hours is hard.

I arrive at Neeb Hall at the blessedly late hour of 9:30, coffee in hand. Neeb is the single biggest auditorium on campus, and it is nearly full. Fortunately, I manage to grab a seat in the front by my friend John Carter McKnight. M83’s Midnight City is playing, and fades out as Joel Garreau introduces the conference. The perfect song to start the day; if Midnight City doesn’t get you pumped up, you may in fact be clinically dead.

M83 | Midnight City from DIVISION on Vimeo.

First up is ASU President Michael Crow. He opens with a simple question, “Are you happy with the trajectory of our country?” *Crickets*. Out of 500+ scientists, artists, designers, futurists, and civilians, not a single person is happy or optimistic. Crow explains his philosophy: we are trapped by ossified bureaucracies, and particularly our institutions of knowledge production have become routinized and solidified around disciplinary silos. The Generic State University is full of uninterested students not learning from boring professors.
ASU aims to fix that, finding emergent ways of organizing genius. Since Crow’s arrival, he’s shattered departments and reorganized them around knowledge enterprises (the Biodesign Institute, the Consortium for Science, Policy and Outcomes, Games for Learning) and big questions like the Origins of Everything or Sustainability. The idea is to dump ‘valueless engagement’ and re-center Exploration as the core value of the university. “The only way to discover where we want to go is to intensely imagine.”
Michael Crow is a polarizing figure, and from my position as somebody who’s at ASU very much because of what he is trying to do, I think that you have to give his reforms mixed reviews. There are lots of corpses of interdisciplinary collaborations littered across the campus, and there are still plenty of uninterested students and boring professors. On the other hand, he has attracted a solid core of really amazing scholars, and at least he’s trying to engage with the future of higher education, rather than just aiming to maintain his stats. I am continually astounded that somebody gave Michael Crow a major university, but I would also follow him to the gates of hell.

Next, Neal Stephenson. Neal is chairing the first panel, but he offers his thoughts on visioning the future, although not before first noting that “I hope I’m not old, ossified, etc. I don’t want Crow to dynamite me and terraform the rubble,” a line which gets major laughs. Being dynamited by Crow will be a theme throughout the day. For Neal, visioning implies an internally coherent picture, not just a random grab-bag of ideas. He writes fiction because it’s really cheap. The big problem that Neal is grappling with (and one that he, ASU, and I are working on) is how to effectively transform imagination into innovation. “Somebody from 1900 would not understand 1968. Somebody from 1968 would get 2012. Somewhere along the line, we lost the ability to effectively imagine and envision the future.” Of course, Neal is not a big fan of futurism as a practice, “Future is my new F-word.” But the man who brought us Snow Crash and Diamond Age is looking for the next big scientific breakthrough.
I do have some doubts about Neal’s conception here. Isn’t the big change between 1900 and 1968 the rate of technological change, rather than any new technology (cars, airplanes, computers, rockets, nuclear power) in and of itself? Alvin Toffler talked about the problems surrounding rate of change in the classic Future Shock, but at this point, I think that the group of neotenic (change-seeking) individuals is large enough, organized enough, and influential enough that future shock isn’t what it once was. For some people, even The Singularity wouldn’t be a surprise.

Following Neal is Stewart Brand, gnomic member of the original Merry Pranksters and the environmental movement, and the inspiration behind the Whole Earth Catalog, the Blue Marble photo, and the Long Now Foundation. Brand uses his perspective of over 40 years as an environmentalist to speak out against “Earth National Park” and the idea that any interference by man in nature is a violation. For him, a survival future involves a gardening mentality, and science helps with that. Demographically, the future is new cities full of young people in the Global South, and they won’t much care what old white people in Brussels and Washington DC have to say to them. Brand might be living proof of SMBC’s Law of Futurology (his current project is bringing back the passenger pigeon), but he is as always an engaging and controversial speaker.

Corner Convenience from hellofosta on Vimeo.

Corner Convenience is probably the stand-out product of the workshops, as an attempt to imagine the everyday materiality of new foods, forms of entertainment, and ways of paying for things in the very humdrum location of the convenience store. Two words: Panda Jerky.

Sherry Turkle is next. She’s the author of Alone Together: Why We Expect More from Technology and Less from Each Other, and an MIT professor, but for all her standing in the digital humanities, she holds a strong conviction that virtuality damages something important about our humanity, and that we are replacing complexity with technological oversimplification. She’s an elegant speaker, but not a very good presenter, and my final thought was that Turkle is a digital bioconservative, the equivalent of Leon Kass who is disgusted by social change, cannot explain why, and so elevates disgust to a moral principle.

At this moment, it turns out that not only am I sitting next to John Carter, but @buildcyberworld (she of the Brad Allenby==Cave Johnson quote) is right behind me. Live tweeting events is weird.

Bruce Mau, the next presenter, knocked it out of the park. Bruce is a true design guru. He’s the force behind ASU’s web design, which is ahead of 90% of university web designs (think I’m kidding? Check out the rest and report back), a 1000-year plan for Mecca, and fixing the future in general. It’s hard to pin down Bruce Mau, but he is highly quotable.

And closing the day is Bruce Sterling. Bruce remarks, “The telepathic monkey is weirdly melancholic. Science-fiction has been doing telepathic monkeys for so long that to see one in the flesh is a little dull. Nobody’s everyday life is weird and wondrous.”

But technology is provisional, and wonder is a beautiful frame of mind that should probably be reserved for the eternal and universal.

Bruce finishes by saying, “Summing up what’s happened here is impossible, but I can demo it,” and launches into a truly weird piece of performance art: he puts on his telepathic brain-reading gloves (bought at the Corner Convenience store) and summons up an augmented reality interface to 3D print some improvements to his house and, finally, to help him learn Spanish through old Mexican comedies.

“I connected to a human moment, I understood the joke. Learning in context is the victory condition.” I cannot disagree with Bruce here.

I took a short break for dinner, and to tour some of the art exhibits with Marci, including RC helicopter minigolf and an Intel exhibit on steampunk superheroes, followed by the Immerge Light/Music/Art festival.

Immerge was a truly weird event, a collection of digital art installations set on a highly abstracted concrete plaza by the ASU art museum. Waiting for it to start, I amused myself playing with an interactive video/music display hacked together out of a Kinect sensor and an iMac. Playing this instrument with no keys or strings or tactile feedback was really strange. You danced like a maniac in an attempt to elicit music. It was like the humans were entertaining the robots.

At a few minutes past 7:00, thunder rolled, and a projected waterfall rolled down the side of the music hall. Fractal trees grew on pink stucco walls, and the edges of the ASU art museum were picked out in lights. Dancers costumed in electro-luminescent body-suits, and armed with Fresnel lenses and half-globes that displayed strange images (iPads in a handheld casing) moved through the crowd, scanning trees and onlookers. It was like a visitation from some post-human Phoenix, a lush jungle city of beautiful glowing scientists collecting strange aesthetic data on the past.

With the conference a week gone, can I answer the question “what was EMERGE?” It was tons of fun, it was provocative, it was the kind of thing that could only happen at ASU. Did we make the future? Probably not, but hopefully a few more minds were engaged with the future, and a few durable ideas will come out of it. EMERGE was an oasis of optimism and creativity in a desert of bleak short-sightedness.

As Joel Garreau said at the start of the day, “The difference is that at ASU, scientists and arm wavers drink with each other.”


EMERGE Impressions Day 1

I just got out of EMERGE, a design futures event put on by ASU that brought together artists, scientists, writers, hackers, designers, futurists and other maniacs to reflect on what kind of future(s) we want to live in, and then over the course of three days, try and make those futures by any and all available means. I was an ethnographer for that event, which means that it's my job to translate the ephemeral lived experience of attending EMERGE into recorded data. This is my first draft.

Fischerspooner - Emerge from Jon Kane on Vimeo.

0900 3-1-2012
I'm sitting in the cavernous Stauffer Flex Space, balancing a large black notebook, iPad, and coffee. Our rows of black chairs are dwarfed by the height of the hall; strange and hulking objects are shrouded in shadows along the walls. The conference organizers mingle in the space between the front row and the minimalist dais. I can see Bruce Sterling. The attendees are settling into their seats; my fingers are poised in anticipation, ready to take notes and live tweet the event.

Cynthia Selin gets up and introduces the conference. Our goal is foresight through design and story-telling to achieve a more sustainable and equitable future. "Stop being a passive consumer of technology and make the future."

13 ASU researchers take the stage to present their research. Topics include: sustainability and interest in K-8 education; biological computer chips that scan antibodies in the blood to diagnose a full spectrum of diseases; algae into oil, plastic, and everything else; telekinetic cyborg-monkeys; DARPA's transhuman super-soldier program; video games that enable socially transformative and empowering play; social networks that reflexively aim to minimize ecological impact; democratically governing technology; and sensor networks that autonomously seek meaningful knowledge. Bruce Sterling pronounces the morning "The weirdest set of presentations I've ever seen."

1210 3-1-2012
Bruce Sterling takes the stage. After a brief digression on multidisciplinarity, and how scientists and artists talk past each other, asking "What is here that I can use/be entertained by?" rather than "What is actually going on here?", he launches into what he thinks about design fiction. Design fiction is a diegetic prototype. It's a way to use our love of gadgets and our ability to discuss objects/services to move past ideological debates. Design fiction is a hack to avoid political paralysis.

Most objects in history have been imaginary, but in the past only elites like Big Auto and AT&T could really do exploratory prototyping like concept cars or Disney EPCOT. In a networked society, prototypes are accessible to anybody, they are public. Design fictions crystallize techno-social potentials by showing them in a human context.

Sterling branches off to science-fiction for a moment. Writing scifi about a phenomenon classifies that phenomenon as scifi. The fact that we think of it as scifi is burdensome (like those telekinetic cyborg monkeys, the DARPA super-soldiers, and what actually happens in those labs). The big question in science-fiction is now "Does the girl kiss the vampire?" because paranormal romance sells books. The demands of the publishing industry have pulled the genre's teeth.

Back to art, design, and science. All our creative disciplines use the same hardware now. Boundaries are corroded. We're spread all over the landscape. Everything is awesome, nothing is interesting, but we can prevail. The idea is to turn speculation into coherent traces, to make it approachable. EMERGE is something that no scifi movie could make happen.

Maria Bezaitis takes the stage. She's a senior ethnographer at Intel. Intel actually has quite a lot of ethnographers because they want to know how people use their chips so that they can keep selling them over deep time. But understanding people isn't enough, because the big actors are all hybrids now ((shades of Latour's monsters, cyborgs, and post-humans)). Corporations have invented people, and they are ugly. We need to figure out what our point-of-view on people is, we need to make it explicit, so that we can get something beyond the standard Facebook/Google/Gamer/User.

According to Maria, Big Data is going to dominate the future. Digitization is well into the process of disappearing our things into the cloud. Digital objects are mobile narrative devices. We all make things now, the digital traces that are left behind whenever we interact with a computer, but we do not yet know who owns these traces. The entrepreneurs of the future will need this data, it is the core input to their economy. Thinking in terms of data privacy or piracy is wrong; it boxes up data and perpetuates monopolies. We need to move forward.

I agree with Maria entirely about the future of information. It's just that I don't know anybody who is ready for a world where our digital traces have become an autonomous persona that is essentially beyond our grasp, because as the product of many interactions and systems, it is too rich and full to be deleted, even by us. We like the State/Police having a monopoly on privacy, because mostly they use that monopoly responsibly, and when they don't the ACLU knows where to find them. Corporations like monopolies on their IP because it allows them to make money and stay in business. If both of these concepts are obsolete, we are in for a fundamentally strange and terrifying future.

"Prototypes are very disruptive because they are easily appropriable."
— Maria Bezaitis

1500 3-1-2012
Brian David Johnson, the leader of the workshop on Science Fiction Prototyping, is late to the conference because he is arriving from Seoul via Portland. I have volunteered to pick him up since A) I have a fast car, B) I want to interview him before the workshop, C) I cannot very well ethnograph a workshop that is not taking place. So while the rest of the group is touring an Intel exhibit on steampunk futures, I am navigating the Ballardian labyrinth of the Sky Harbor access roads, weaving back and forth between the monumental plinths of the terminals and horrid American sedans and SUVs driven by semi-senile senior citizens. I tell Johnson as much on the phone as he retrieves his luggage. "That Ballard reference--this is going to be the start of an interesting friendship." Indeed.

The workshop and day 3 of the conference deserve their own posts, so I will not include them here. Let it just be known that they were awesome, and I now need to finish my story about a neuropharmaceutical hacker and an investigator from the CDC trying to reach some understanding of trust and the public good in a world where research has outpaced regulation.

And now to sleep, perchance to dream of electric sheep.