Sure, why not.
(If nothing loads, no worries, this requires JavaScript and a recent web-browser)
Edit: move your cursor over the square, it reacts.


The Right to Be Forgotten

The recent revelations of a set of massive and longstanding NSA surveillance programs have prompted a blizzard of accusations, defenses, and recriminations from across the political spectrum. PRISM and related programs have been called everything from “all the infrastructure a tyrant could need” to a vital component of national security, while leaker Edward Snowden has been called everything from a hero to a traitor. A week later, the pieces have yet to settle. But what’s been bothering me in all this is the confusion of ideas about state power, civil liberties, surveillance, computer security, and privacy. It bothers me because without clear ideas we cannot have clear policies, and without clear policies liberty, security, and any other desired public good are achieved only by accident.

PRISM is not strictly speaking surveillance.  It looks like surveillance, it feels like surveillance, but it lacks the main purpose of surveillance: creating a disciplinary power relationship.  When scholars talk about surveillance in a rigorous sense, they’re mostly talking about Foucault’s theory of the Panopticon. The original Panopticon was a plan for a prison, with the cells arranged so that a single guard could watch all the prisoners, and dispatch punishments and rewards as appropriate to the prisoner’s behavior.  Eventually, according to Foucault, the desires of the warden would be internalized by the prisoners, and they would behave as planned. They would be disciplined.

Foucault’s genius was noting that the architecture of the Panopticon allowed the mechanisms of power to operate at very little cost, because prisoners could not tell when they were under observation, and so would always have to behave as if they were being watched. Additionally, panopticonic structures were everywhere: in classrooms, hospitals, urban renewal of medieval districts into broad boulevards, even the bureaucratic organization of the modern state into administrative districts and statistical agencies, to the point that a scholar describing yet another panopticon is met with a sigh and shrug.

As Whitney Boesel of Cyborgology noted, when we look for a disciplinary purpose in these NSA programs, we find nothing. Despite editorials in the New York Times and on ThinkProgress with a Foucault 101 explanation of the panopticon, and gestures towards the chilling effect and future potential harms, it’s difficult to point to any specific thought or speech act that someone did not have as a consequence of the potential that they might be added to an NSA database. People appear to be totally free to say and think whatever they want online, including espousing flatly anti-democratic opinions across the political spectrum. These programs are no more panopticonic than 20th-century statecraft in general.

Privacy has a totemic value in American political discourse, but privacy as a concept is fuzzy at best. Philosophically, my colleague Jathan Sadowski describes privacy as “that which allows authentic personal growth,” a kind of antithesis to the disciplining and shaping of the panopticon. Legally, American privacy originates in a penumbra of rights defined in the 4th Amendment (protection from arbitrary search and seizure), 9th Amendment (other, unspecified rights), and 14th Amendment (right to due process). Privacy has further become established as part of the justification for reproductive freedom in Griswold v Connecticut and Roe v Wade, loading it with all the baggage of the culture war.

But what is privacy, really? Future Supreme Court Justice Louis Brandeis, in an influential 1890 essay, described it as “the right to be left alone.” Brandeis’s essay was published in the context of an intrusive popular press using the then-new technology of instant photography to violate the privacy of New York society members. Brandeis extended the basic right of a person to limit the expression of their “thoughts, sentiments, and emotions” into a fundamental divide of public and private spaces. Since then, mainstream legal thought has attempted to apply Brandeis’s theory of privacy to new technology and new concerns, with varying degrees of success.

Brandeis’s metaphor breaks down in the face of Big Data because Brandeis was concerned with the gradations of privacy in space (it is acceptable to be photographed on the red carpet at a premiere, unacceptable on your doorstep, totally illegal inside your home), and computers and data are profoundly non-spatial. There is no “Cyberspace.” That’s an idea cribbed from a science-fiction book written by a man who’d never seen a computer. Spatial metaphors fundamentally fail to capture what computers are doing. Computers are, mathematically speaking, devices that turn numbers into other numbers according to certain rules. These days, we use computers for lots of things: science, entertainment, but mostly accounting and communication. And for the latter two uses, the phrase “my personal data” (which inspires so much angst) confuses personal to mean both “about a person” and “belonging to a person.”

Advocates of strict privacy control tend to confuse the two. Privacy is contextual, social, and protean, so I’d like to analyze something concrete instead: secrets.  A secret is something that a person or a small group knows, which they do not want other people to know. Most “personal data” is actually part of a transaction, whether you’re buying a stick of gum at the gas station or looking at pictures stored on a remote server. We’re free to keep records of our side of the transaction, yet we’re outraged when the other side keeps records as well. We could ask the other side of the transaction to delete the records, or not share them, but at its strongest this is a normative approach. There’s no force behind it.

Moving from the normative ‘ought’ to ‘is’ requires a technological fix. Physical privacy is important, but walls and screens are far more sure than the averted gaze. It’s wrong to steal, and valuable things are locked up.  The digital equivalent to walls and locks is cryptography—math that makes it difficult to access a file or a computer system. Modern crypto is, technically speaking, very, very good. AES-256 is unbreakable within the lifespan of the universe, assuming it’s correctly used. The problem with cryptography is that it’s very rarely used according to the directions. People use and reuse weak passwords, they leave themselves logged in on laptops which get left in taxis, or they plug in corrupted USB keys, compromising entire networks.

There is a very real chance that there is no such thing as digital privacy or security; that Stewart Brand’s slogan that “information wants to be free” is true in the same way that “nature abhors a vacuum.” The basic architectures of computers, x86 and TCP/IP, are decades old and inherently insecure in the way that they execute code. Cloud services are even worse. We as users don’t own those servers, we don’t even rent them. We borrow them. Google and Facebook aren’t letting us use their services out of the goodness of their hearts, and the data that we enter (personal data in both senses) is the source of their market power. Sure, there are cryptographically secure alternatives (DuckDuckGo, Hushmail, and Diaspora come immediately to mind), but their features are lacking and relatively few people use them. Crypto is both hard, and runs directly against the business model of major internet companies.  The best way to keep a secret is not to tell anybody, and if you have a real secret, I’d strongly advise you to never tell a computer.

Practically, not even the director of the CIA follows that advice. Unless you’re Amish, you have to tell computers things all the time, which leads to the problem of what the government can do and should not do with all that data. I personally don’t like the “if you’ve done nothing wrong, you have nothing to fear” arguments advanced by advocates of the security state, because the historical record shows plenty of good reasons to distrust American intelligence agencies, ranging from mere incompetence (missing the imminent collapse of the Berlin Wall, Iraq’s non-existent WMDs, the Arab Spring) to outright criminality (CIA-backed assassinations in the 60s and 70s, COINTELPRO, Iran-Contra). But this wasn’t some kind of rogue operation: data was collected according to the PATRIOT Act, overseen by FISA judges, and Congress was informed. Certainly it was according to the letter of the law rather than the spirit, but it happened within the democratically elected, bipartisan mechanisms of government just the same. It’s hard to deny that many voters were willing to make that trade of liberty against security.

The dream of counter-terror experts everywhere is some kind of perfect prediction machine, some kind of device which could sift through masses of data and isolate the unique signature of a terrorist plot before it materializes. This is a fantasy. Signals intelligence and social network analysis are immensely useful for mapping a known entity and determining its intentions, but picking ‘lone wolves’ out of a mass of civilians is a different beast entirely. Likewise, data mining can do great work on large and detailed datasets, but since 2001 there have been only a handful of terrorist attacks in America and Europe (local insurgencies have very different objectives and behaviors). There is no signature of an imminent terrorist attack. Realistically, what these systems can do is very rapidly and precisely reconstruct the past, making the history of an event legible to determine the extent of an attack and hunt down co-conspirators.

What’s happening isn’t really surveillance; the millions of people buying 1984 are reading the wrong part of the book. Orwell’s Party is terrifying not because of the torture chambers in the Ministry of Love, but because it can say “We have always been at war with Eurasia, and the chocolate ration has been increased to 2 oz” (when last month it was 3 oz), and what The Party says is true. Rewriting history is dangerous for nations, but as Daniel Solove has eloquently pointed out, for individuals the proper literary comparison isn’t 1984, it’s Kafka’s The Trial, where the protagonist is bounced powerlessly and senselessly through an immense bureaucracy.

The political problems of these programs and Big Data are not the same as the problems of secret prisons, torture chambers, and non-judicial executions, though all those things are very real and very dangerous to civil liberties. The more common assaults are the unnecessary audit, the line at the airport, the job application rejected because of a bad credit score, and the utter lack of recourse that we as citizens have against these abuses by many large-scale organizations, including corporations and governments.

We could mandate laws to force basic changes in how computers work and how data is collected, such as deleting everything as soon as it comes in or packing databases with random chaff.  Purely legal solutions to technological problems are almost never effective, and usually add another layer of complexity to the existing mess. As any security expert will tell you, security through obscurity is no security at all, and anonymous data is far from anonymous. Giving up and living as suspects under glass, fugitives in our own lives, is equally unappealing.

The alternative is recognizing that in a world of omnipresent computation, leaving traces behind is inevitable, but that rather than the uncertain shield of privacy, we can wield a sword of truth. To ask for privacy is to ask to be forgotten, something both impossible and generally undesirable. We should have the right to set the record straight, to demand to know what is known about us as individuals and as a population, and to appeal what are currently non-judicial and unaccountable actions. Brandeis’s right to be left alone is not the right to disappear, but rather to demand that those who would try and harass us reveal themselves and defend their actions.

This essay was originally published at As We Now Think


Google's Mad Scientist Island

Google's I/O 2013 conference was yesterday, and the tech journalistic consensus is trickling in. In terms of specific product launches, there's nothing with the "wow" factor of last year's Google Glass, but according to Lance Ulanoff, "Google’s worldview is finally coming into focus. The tenuous threads that connect these dozens of different applications and services are strengthening and gradually being pulled closer together. Underneath it all is Google’s vast web of information and smarts, which is all about us." Google's products are getting sleeker, more graceful, less skeuomorphic. I might even go so far as to say 'intuitive and emotional'. (A skeptic might say 'intrusive and creepy'.)

The highlights were in the post-keynote by Larry Page.

 First, "We should be building great things that don't exist." This is a pretty cool sentiment, particularly from a company like Google, which combines massive size with a desire to innovate. I'm reminded a little of classic era Bell Labs and Xerox PARC, where a steady cashflow from telecoms/office machines went to support radical ideas in electronics and personal computing. Google's search/advertising business gets us wearable computers and self-driving cars.

Second, Larry Page wants earth to have a mad scientist island. This is the !!! moment of the conference, and honestly, I'm not sure what to think, which is why I want to run this by you guys.

Personally, I agree with Page that research is slowed by laws and regulations, but the effect is probably not as big as he thinks. What really slows research down is our species' innate conservatism. On the business side, this is exemplified by demands from Accounting and Marketing that the new product be profitable and interoperable with older versions of the system back to 19xx. On the academic side, it's the publish-or-perish paradigm, which has researchers focused on "do-able" projects as opposed to "needs-to-be-done" projects.

It'd be nice to start with a clean slate, without the pressure to make everything work with existing systems, conform to building codes, or make money or sense this year. But I think that such a place, if it existed, would need oversight. The planet would be rightfully concerned if Mad Scientist Island started dumping toxins into the environment, or systematically violating human rights. Independent research enclaves could be a great idea, if they could be inspected without destroying their unique culture.


The Transhumanist Program

Hava Tirosh-Samuelson’s recent volume on transhumanism opens with the statement of purpose “This anthology takes transhumanism seriously not because it is a significant social movement, which it is not, but because the transhumanist vision compels us to think about ourselves in light of current technological and scientific advances and to reflect on the society in which we wish to live.” I disagreed with many of the essays in the book, but not with this statement.  Transhumanism is, based on my participation in and observation of its organizations over the past five years or so, definitely a fringe vision, but Francis Fukuyama called it “the most dangerous idea of the 21st century” for a good reason.   ‘Dangerous’ is an unwarranted judgment, but what is it about transhumanism that merits further reflection?*

On the surface, transhumanism is just another glossy scifi future. The basic idea is that through science and technology, we can take control of our biological destinies, both as individuals and as a species, and engineer away such inconvenient facets of human existence as disease, unhappiness, stupidity, aging, and death. In novels, and in more serious works of futurism like Eric Drexler’s Engines of Creation and the 2002 National Nanotechnology blueprint, this image of the future is one where Silicon Valley gadgetry meets biomedicine to put humanity on a perennial upgrade path guided by turtle-necked tech gurus with enthusiastic product launches. This view of transhumanism is the most common one, and is easy to mock, but underneath this shallow gee-whiz techno-utopianism, transhumanism poses a radical answer to a very old question: “What is our place in the universe?”

The oldest and most universal answer to this question of existence is divine creation: some greater force made the universe, put humanity on this Earth, and imbued us with special purpose. All faiths emphasize the importance of Obedience, Transcendence, and Redemption: Obedience to moral laws of divine origin, the possibility of personal Transcendence from a mundane world of suffering to a divine world of perfection in this lifetime, and the future Redemption of the entire world to a state of perfection when the divine will is finally enacted. The combination of these elements and their exact details vary significantly across faiths, but their divine origins and importance to everyday life are the foundations of all religion.

The problems with divine solutions to this question of existence are twofold. First, there is no “Universal Religion”, no single true divine law on which all humans agree. There are multiple religions, which differ not just on minor points of the revealed word of God, but on basic theological issues. Attempts to convert others to the ‘true faith’ have caused bloody wars and left syncretistic intrusions of older myths in the new faith.

Moreover, divine revelation is one of the most disruptive and dangerous forces in history. I am an atheist, and while I personally don’t believe in god, I recognize that people can feel a very strong connection to some higher power. Institutionalized churches and theocratic governments, with their legitimacy based in both interpretation of the arcana of past revelations and worldly political power, are threatened whenever people can start to directly experience the divine.  Look at the difficulties of the Catholic Church in containing evangelical movements like that of St. Francis of Assisi, or the more contemporary problems of the Mormon faith transitioning from the personal revelation of Joseph Smith to an institution led by Brigham Young and his successors. Religion can rule, but it loses its sacred power in the mess of politics.

With the decline of supreme religious power, codified in the west in the 1648 Peace of Westphalia, which reestablished freedom of (Christian) worship as a European right, people began looking for answers to this existential question outside the structure of divine revelation. Enlightenment humanism established the idea of the rational individual engaged in reasoned discourse to create progress in a naturalistic universe as a new foundation of order. Human beings were uniquely endowed with intellectual faculties: primarily logic, language, and empathy; which could be used to engage with others across space and time to discover and clarify moral laws and progress towards a state of perfection. The universe was taken as an external fact, objective and the same for all observers, the nature of which could be understood through empirical inquiry.

Humanism is at the center of Western philosophy, but it has taken quite a beating in the 20th century. Major ideologies like Fascism and Communism were flatly anti-humanist, dealing in masses rather than individuals. Disciplines such as economics and ecology replace the individual with larger abstractions, like the market or the environment. Academic elites used post-modernism, post-colonialism, and feminism to critique the humanist tradition as arbitrary and exclusive, gutting it from within. The horrors of the Holocaust and the world-ending threat of the atom bomb made the idea of the perfection of human wisdom through intellectual achievement laughably obsolete. And on a gut level, the past 150 years of rapid technological change have orchestrated greater changes in the human condition than the previous 1,500 years, or possibly even the previous 15,000 years. Marx was right: All that was solid has melted into air. We post-moderns feel profoundly disconnected from the humanist tradition.

The transhumanist program is based on an idea of human beings as an evolved biological system, with a lineage that can be traced back billions of years to the first self-replicating bits of RNA, and then on to simple cells, multicellular organisms, and so on. Modern humans are unique among the animals because we coexist with a second evolved system, broadly speaking encompassed by the categories of culture and technology.

What distinguishes transhumanism from a naturalistic reading of history is the transhumanist teleology. Transhumanists see humanity merging with its tools, becoming a cybernetic species—one capable of regulating its individual bodies, and its collective environment. The classical cyborg was an idealized astronaut, designed for exploring the cosmos, and the transhuman goal is the expansion of human-descended beings through time and space. Mere biology and a single planet are too frail to guarantee our survival. Avoiding extinction over deep time means we need to turn our intelligence to the problems of existence, durability, and change.

There is a lot that I take issue with within the transhumanist program. Their theory of evolution is shallow and based more on hearsay than any kind of actual science. On an individual level, the disposable gadget orientation towards biology doesn’t have much relevance to real bodies, which are stubborn and recalcitrant things. On a larger scale, if transhumanists are serious about embarking on a project of radical evolution, they’ll need to engage and win over a skeptical public.  Currently, “responsible” policy-makers have set themselves up as the direct antithesis of transhumanists, defending an intrinsic human nature from gene-hackers and insane AIs, among other existential threats.

But for all the flaws of transhumanism, the reason why I call myself a transhumanist is that it is the only ideology which is attempting to grapple seriously with the problems of our future as a technological species. We’ve already altered our planet; the new word in conservation is the anthropocene. The wonders of modern medicine have increased healthcare costs more than they’ve extended life; we need to move beyond curing death one organ at a time towards a holistic rejuvenation approach. The weakening of social cohesion, widespread increases in psychological instability, and the inexplicable nature of contemporary violence can be laid on the rise of greed as the only universal value. Money is a useful tool, but there must be other ways to find value and purpose in the world. And finally we need to begin looking seriously at the fragility of our technological networks, and ways in which they can be made more resilient. These problems are wicked; fraught with irreconcilable conflicts over values and the basic terms of the debate, but I believe that debate and action are necessary.

Transhumanists may be committing the crime of hubris, but hubris is better than willful irresponsibility. The overwhelming public dissatisfaction that the status quo is breaking down can only be met with new ways of seeing, thinking, and being in the world.

*This essay is the companion to January’s Three Faces of Transhumanism


Generating Vivid Geometric Hallucinations using Flicker Phosphenes with the “Neurolyzer Table”

SaikoLED, a Cambridge, MA-based open-source and open-hardware lighting company, modified their "Neurolyzer" display table prototype to induce flicker phosphene geometric visual hallucinations. I contributed a brief writeup on one of the neurological theories of how such hallucinations arise, which is included in their post (draft mirrored on dropbox). Shown to the left is their Neurolyzer display table being used with some of Nervous System's Hyphae and Radiolaria designs.


Design note: "Charlieplexing" LED matrices: Pin savings of Charlieplexing, easy assembly of multiplexed LED modules.

( dropbox mirror )

Charlieplexing ( named after Charlie Allen ) is a great way to save pins on a microcontroller. Since each pin can be either high, low, or off ( 'high impedance' ), and LEDs conduct in only one direction, one can place two LEDs for each unique pair of microcontroller pins. This also works with other devices, like buttons, but you can only place one for each pair of pins since they conduct in both directions*. However, a recent foray into designing with Charlieplexing revealed its major drawback to me: soldering a zillion discrete LEDs is very time consuming and not for everyone. It is easier to use LED modules, which have LEDs already wired up, and are designed to be driven by multiplexing. For an N by M multiplexed grid you need N+M driving pins, but for an N by M charlieplexed grid you need only K pins, where K(K-1) ≥ NM **. Happily, there is often a way to Charlieplex LED matrices that saves pins without increasing assembly difficulty.
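As a back-of-envelope check of the pin math above, a small helper (a sketch of my own, not from any particular codebase) can find the smallest K satisfying K(K-1) ≥ NM. Note that this bound counts discrete LED sites; fitting pre-wired modules onto the grid, as the sections below show, usually needs a few more pins than the bound alone suggests.

```c
/* Smallest pin count K such that K*(K-1) >= n*m, i.e. a charlieplexed
 * grid of K pins has at least n*m usable LED sites.
 * Compare with plain multiplexing, which needs n+m pins. */
static int min_charlieplex_pins(int n, int m) {
    int k = 2;
    while (k * (k - 1) < n * m)
        k++;
    return k;
}
```

For a single 8x8 matrix (64 LEDs) this gives K = 9, versus 16 pins for straight multiplexing; for a 5x7 matrix (35 LEDs) it gives K = 7, versus 12.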

Thinking about charlieplexed grids

One might ask: how the hell am I supposed to keep track of all the possible combinations used in Charlieplexing? Since each pin can be either high (positive, or anode) or low (negative, or cathode), we can draw a K by K grid for K pins, where the cases where a pin acts as an anode are on one axis, and as a cathode, the other. Along the diagonal you have sites where a pin is supposed to act as both an anode and a cathode -- these are forbidden, and are blacked out. Here is an example grid for 16 pins:

Placing modules

I can now place components on this grid to fill it up. Say I have an 8x8 LED matrix with 8 cathodes and 8 anodes. All I have to do is find an 8x8 space large enough to hold it somewhere on the grid. For example, two 8x8 LED matrices fit into this 16-pin grid:

Another common size for LED matrices is 5x7. We can fit two of them on 12 pins like so:

Now it gets fun. It's ok for components to wrap around the sides. We can fit four 5x7 ( for a 10x14 pixel game board perhaps? ) matrices on 16 pins like this:

We can fit six 5x7 matrices on 18 pins ( for a 10x21 pixel game board perhaps? Large enough for original Tetris! ). Eight 5x7 matrices fit on 20 pins. 8x8 matrices are a little more clunky, but you can still fit 3 of them onto 20 pins or 4 of them onto 22 pins ( 22 pins also fits 10 5x7 arrays ). We leave these last few as exercises. ( solutions 1 2 3 4)
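The placement rules above — rectangular blocks, wrap-around allowed, never touching the forbidden diagonal — reduce to a simple check. Here is a sketch (the function name and layout conventions are my own) that tests whether a module fits at a given grid position:

```c
#include <stdbool.h>

/* Does an h-by-w LED module (h anode pins, w cathode pins) fit at
 * position (r, c) of a K-pin charlieplex grid?  Rows index the pin
 * acting as anode, columns the pin acting as cathode.  Wrap-around is
 * allowed, but no cell may land on the forbidden diagonal, where one
 * pin would have to be both anode and cathode at once. */
static bool module_fits(int K, int r, int c, int h, int w) {
    for (int i = 0; i < h; i++)
        for (int j = 0; j < w; j++)
            if ((r + i) % K == (c + j) % K)
                return false;   /* hit the diagonal */
    return true;
}
```

For instance, the two-8x8-matrices-on-16-pins layout corresponds to one module at rows 0-7 / columns 8-15 and a second at rows 8-15 / columns 0-7; placing a module at the origin fails immediately on the diagonal.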

To demonstrate that this approach does in fact work, I rigged up a little game of life on four 8x8 modules running on 22 pins on an ATmega328. After correcting for a problem with the brightness related to the PORTC pins sourcing less current, the display is quite functional -- the scanning is not visible and all lights are of equal brightness. I scan the lights one at a time, but only spend time on those that are on. (The variable frame rate is from the video processing -- the actual device is quite smooth)
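The one-at-a-time scan that skips unlit LEDs can be sketched in portable C. Real AVR code would write the DDR/PORT registers directly; here the pin drive states are recorded in an array so the logic stands alone (the pin count and names are illustrative, not from the original driver):

```c
/* One full scan of a charlieplexed frame, one LED at a time, skipping
 * LEDs that are off (as in the game-of-life demo).  On an AVR these
 * pin writes would set DDRx/PORTx; this sketch just records them. */
enum pin_state { HI_Z, DRIVE_LOW, DRIVE_HIGH };

#define NPINS 22

static enum pin_state pins[NPINS];
static int lit_count;               /* LEDs actually driven this scan */

/* frame[a][c] nonzero => LED with anode pin a, cathode pin c is on */
static void scan_frame(unsigned char frame[NPINS][NPINS]) {
    lit_count = 0;
    for (int a = 0; a < NPINS; a++) {
        for (int c = 0; c < NPINS; c++) {
            if (a == c || !frame[a][c])
                continue;                 /* skip off LEDs entirely */
            for (int p = 0; p < NPINS; p++)
                pins[p] = HI_Z;           /* everything else floats */
            pins[a] = DRIVE_HIGH;         /* source current */
            pins[c] = DRIVE_LOW;          /* sink current   */
            lit_count++;
            /* a real driver would dwell here (busy-wait or timer
             * tick) to set the LED's on-time */
        }
    }
}
```

Because only lit LEDs consume scan time, a sparse display like a game of life refreshes faster than a full one.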

Other packaged LED modules can be laid out similarly. 7 segment displays ( 8 with decimal point ) come packaged in "common cathode" and "common anode" configurations, which would be represented as a column of 8 cells, or a row of 8 cells, respectively. Often, four 7-segment displays ( 8 with decimal ) are packaged at once in a multiplexed manner -- these would be represented as a 4x8 or 8x4 block on our grid, depending on whether they were common anode or cathode. RGB LEDs also come packaged in common cathode and anode configurations. For example, here is how one could charlieplex 14 common anode RGB LEDs on 7 pins:

Hardware note: don't blow it up

When driving LEDs with multiplexing or charlieplexing, it is not uncommon to omit current limiting resistors. Since the grid is scanned, only a few LEDs will be on at once, and all LEDs spend most of their time off. If the supply voltage lies between the typical forward voltage and the peak instantaneous voltage, we can figure out the largest acceptable duty cycle and enforce it in software. However, one must then ensure that software glitches cannot cause the array scanning to stall, or that the LEDs can survive a sustained period of elevated forward voltage.
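As a rough illustration of the duty-cycle reasoning (the currents below are made-up figures, not from any datasheet — always check your LED's pulse ratings):

```c
/* Back-of-envelope duty-cycle limit for resistorless scanning: if the
 * overdriven LED draws i_pulse_ma while lit (datasheet pulse current
 * at the supply voltage), keep the time-averaged current at or below
 * the continuous rating i_avg_max_ma.  Returns the largest safe duty
 * cycle as a fraction in [0, 1]. */
static double max_duty_cycle(double i_pulse_ma, double i_avg_max_ma) {
    double d = i_avg_max_ma / i_pulse_ma;
    return d > 1.0 ? 1.0 : d;
}
```

With, say, a 100 mA pulse and a 20 mA continuous rating, each LED may be lit at most one fifth of the time — which the scan loop must enforce even when the frame is sparse.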

Microcontrollers will have a maximum safe current per IO pin. Sometimes, you can rely on the microcontroller to limit current to this level. Other times, attempting to force more than the maximum rating through a pin will damage the microcontroller. You can ensure that this never happens in software by never turning on more LEDs than a single IO pin can handle. Or, you can use tri-state drivers. If your microcontroller limits over-current, you can probably turn on as many LEDs as you want at once, but they will dim as the limited current is divided among them.
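The per-pin budget is a one-line calculation (the 40 mA and 15 mA figures below are illustrative assumptions, not ratings of any particular part):

```c
/* How many LEDs may share one sink (or source) pin at once without
 * exceeding the pin's absolute current rating.  E.g. with a 40 mA
 * pin limit and ~15 mA per LED, at most 2 LEDs may share a pin. */
static int max_leds_per_pin(double pin_limit_ma, double led_ma) {
    return (int)(pin_limit_ma / led_ma);
}
```

The scan code can use this number to split a long row into chunks that are driven one after another.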

Combining devices

There is nothing stopping us from combining different types of LED modules, or LEDs and buttons, in our grid. However, buttons conduct in both the forward and backward direction, so they occupy both the anode-cathode and cathode-anode positions for any pair of pins. I represent this as a black and white pair of buttons in the grid drawing. For example, one could get an acceptable calculator with 6 display digits and 21 buttons onto 10 pins if you use a mix of common-cathode and common-anode 7-segment displays like so:

You could probably get a pretty decent mini-game using the space left over from Charlie-muxing four 5x7 modules on 16 pins. There is enough room to fit 17 buttons and 6 7-segment displays (shown as earth-tone strips below):

For the grand finale, we revisit the six 5x7 modules on 18 pins. Apart from giving us a grid large enough to hold classic Tetris, we also have room for 18 buttons, 6 7-segment displays (shown as earth-tone strips below), with 12 single-LED spots left over -- all on 18 pins. On an ATmega, this would leave 5 IO pins free -- enough room to fit a serial interface, piezo speaker, and crystal oscillator. Programming, however, would be a challenge.***

Hardware note: combining different LED colors in one grid

There are problems with combining different LEDs in one grid. If two LEDs with different forward voltages are placed on the same, say, cathode, then the one with the lower forward voltage can hog all the current, and the other LED won't light. I have found that ensuring in software that LEDs with mixed forward voltages are never illuminated simultaneously solves this problem.

Also ensure that your largest forward voltage is smaller than twice the lowest forward voltage. For example, if you try to drive a 3.6V white LED in a matrix that contains 1.8V red LEDs, the current may take a shortcut through two red LEDs in series rather than through the white LED. However, it may be possible to ensure that there are no such paths by design: for every 3.6V forward path from pin A to B, ensure there are no two 1.8V forward paths A to C and C to B for any C.
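That no-sneak-path condition is a small graph check that can be run over a candidate layout before anything is soldered. A sketch (the grid size and millivolt values are arbitrary examples):

```c
#include <stdbool.h>

#define NP 8    /* pins in this example grid */

/* vf[a][c] is the forward voltage (millivolts) of the LED whose anode
 * is pin a and cathode pin c, or 0 if no LED is placed there.
 * Returns true if any intended LED from a to b can be bypassed by two
 * series LEDs a->c->b whose combined drop is no larger. */
static bool has_sneak_path(int vf[NP][NP]) {
    for (int a = 0; a < NP; a++)
        for (int b = 0; b < NP; b++) {
            if (!vf[a][b]) continue;
            for (int c = 0; c < NP; c++)
                if (vf[a][c] && vf[c][b] &&
                    vf[a][c] + vf[c][b] <= vf[a][b])
                    return true;   /* current can take the shortcut */
        }
    return false;
}
```

Two 1.8V red LEDs in series (3600 mV) bypassing a 3.6V white LED is exactly the failure case described above.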

Driving software

Saving microcontroller pins and soldering time is well and good, but programming for these grids can be a real challenge! Here are some practices ( for AVR ) that I have found useful.
  • Overclock the processor. Most AVRs are configured to 1MHz by default, but can run up to 8MHz even without an external crystal. The AVR fuse calculator is a godsend. Test the program first without overclocking, then raise the clock rate. Ensure that the power supply voltage is high enough for the selected clock rate. If things get dire and you need more speed, you can tweak the OSCCAL register as well.
  • Prototype driver code on a device that can be removed and replaced if necessary. Repeatedly fiddling with the fuses to tweak the clock risks bricking the AVR. It's a shame when you have a bricked AVR soldered in a TQFP package.
  • Row-scan the grid. If this places too much current on the IO pins, break each row into smaller pieces that are safe. If too many LEDs are lit on a row and they appear dim, adjust the time the row is displayed to compensate.
  • Store the LED state vector in the format that you will use to scan. Write set() and get() methods to access and manipulate this state that maps the structure of the charlipelxing grid onto the logical structure of your displays. Scanning code is hard enough to get fast and correct without worrying about the abstract logical arrangement of the LED grid.
  • Use a single timer interrupt to do all of the scanning. Having multiple recurring timer interrupts along with a main loop can create interesting interference and beat effects in the LED matrix that are hard to debug.
  • If there are buttons and LEDs on the same grid, switch to polling the buttons every so often at a fixed interval, and write there state into volatile memory that other threads can query.
  • If your display is sparse ( e.g. a game of life ) you can skip sections that aren't illuminated to get a higher effective refresh rate. If your display is very sparse, and you have a lot of memory to spare, you can even scan LEDs one at a time.
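The row-time compensation mentioned above can be as simple as scaling each row's on-time by its lit-LED count. A toy sketch, not taken from any firmware here (the tick unit is arbitrary):

```c
#include <stdint.h>

/* Arbitrary time unit per lit LED; tune to taste on real hardware. */
#define BASE_TICKS 4

static uint8_t popcount8(uint8_t v) {
    uint8_t n = 0;
    while (v) { n += v & 1; v >>= 1; }
    return n;
}

/* Ticks to keep a row displayed, given its 8-bit row bitmap. Rows with
 * more lit LEDs share the available current and look dimmer, so they
 * are held on proportionally longer. */
static uint16_t row_ticks(uint8_t row) {
    uint8_t lit = popcount8(row);
    return (uint16_t)lit * BASE_TICKS;
}
```

A fully lit row is then held on eight times longer than a row with a single LED.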


This document outlines how to drive many LED modules from a limited number of microcontroller pins. The savings in part cost and assembly time are offset by increased code complexity. These design practices would be useful for someone who enjoys coding puzzles, or gets a kick out of making microcontrollers do way more than they are supposed to. They could also be useful for reducing part costs and assembly time in mass-produced devices, where the additional time in driver development is offset by the savings in production. I originally worked through these notes when considering how to build easy-to-assemble LED marquee kits, but as I have no means to produce such kits, nor an easy mechanism for selling them, I am leaving the notes up here for general benefit.

Also... Charrliee.

*If you place a diode in series with a button you can place two buttons for each unique pair of pins. One can make this diode an LED to create a button-press indicator.

**For those interested, K*(K-1)=N*M solves to K = ceil[ ( 1 + sqrt( 1+4NM ) ) / 2]

***I've tried this with an 8x16 grid using a maximally overclocked AtMega. It is tricky. To avoid beat effects, sound, display, and button polling are handled with the same timer interrupt. The music is intentionally restricted to notes that can be rendered with low clock resolution. Some day I may even write this up.


Driving software example: Game of Life on a 16x16 grid

Due to popular demand I've gone through and commented the source code for the game of life demo. There are probably some things that I could have done better, but hopefully it will be a good place to start.
Game of Life charlieplexing 4 8x8 LED Matrix demo.
Designed for AtMega

# to compile and upload using avr-gcc and avrdude:

# compile
avr-gcc -Os -mmcu=atmega328p ./display.c -o a.o
# grab the text and data sections and pack them into a binary
avr-objcopy -j .text -j .data -O binary a.o a.bin 
# check that the binary is small enough to fit!
du -b ./a.bin
# upload using the avr ISP MKII. In this case, 
# it is located at /dev/ttyUSB1, but you would change that argument
# to reflect whichever device your programmer has been mounted as
avrdude -c avrispmkII -p m328p -B20 -P /dev/ttyUSB1 -U flash:w:a.bin

Hardware notes

DDRx  : 1 = output, 0 = input
PORTx : output buffer ( with DDRx=0, writing 1 enables the pullup )
PINx  : digital input buffer
         !RESET     PC6 -|  U  |- PC5
                    PD0 -|     |- PC4
                    PD1 -|     |- PC3
                    PD2 -|     |- PC2
                    PD3 -|  m  |- PC1
                    PD4 -|  *  |- PC0
                    VCC -|  8  |- GND
                    GND -|     |- AREF
                    PB6 -|     |- AVCC
                    PB7 -|     |- PB5   SCK  ( yellow )
                    PD5 -|     |- PB4   MISO ( green )
                    PD6 -|     |- PB3   MOSI ( blue )
                    PD7 -|     |- PB2
                    PB0 -|_____|- PB1

        Programmer pinout, 6 pin:
        6 MISO +-+  VCC 3
        5 SCK  + + MOSI 2 
        4 RST  +-+  GND 1
        Programmer pinout, 6 pin, linear:
        6 MISO +  
        5 SCK  + 
        4 RST  +  
        VCC 3  +
        MOSI 2 +
        GND 1  +

        Programmer pinout, 10 pin:
        3 vcc  +-+   MOSI 2
               + +    
               + +]  RST  4 
               + +   SCK  5 
        1 gnd  +-+   MISO 6

thanks to http://brownsofa.org/blog/archives/215 for explaining timer interrupts

// avr io gives us common pin definitions, 
// and avr interrupt gives us definitions for setting interrupts. 
// ( we will use one timer interrupt to update the display )
#include <avr/io.h>
#include <avr/interrupt.h>

// Basic definitions. Our display is 16 pixels wide and 16 pixels high.
// This is set in N.
// There are 16*16=256 lights total; this is set in NN.
// We also sometimes use NOPs to control timing, via the assembly wrapper
// in the macro "NOP".
#define N 16
#define NN ((N)*(N))
#define NOP __asm__("nop\n\t")

// I use two different sets of buffers for display data. Part of this is
// laziness -- it does consume a lot of memory and wouldn't fly on, say, an
// AtMega8. But it makes the code a little easier to understand.
// First, I store the display pixels in a "raster-like" format. Here,
// each row of the display is stored in a sequence of integers, and
// if a pixel is on, then the bit corresponding to that pixel is set to 1,
// otherwise 0.
// These buffers are used to run the game of life logic because it is
// straightforward to read and write pixel data.
// We use 16 bit integers because this makes more sense for a 16x16 array,
// but this is an abstraction and 8 or 32 bit integers would work just as well.
// We use a double buffering strategy. So, we declare two buffers, and then
// also make two indirect pointers to these buffers. When we want to update
// the display state, we write into the "buff" pointer, while reading the
// previous game state from the "disp" pointer. Then, when we are ready to
// show the next frame, these pointers will be flipped.
#define BUFFLEN 16
uint16_t b0[BUFFLEN];
uint16_t b1[BUFFLEN];
uint16_t *buff=&b0[0];
uint16_t *disp=&b1[0];

// To actually scan the display, I use a sparse format. When we scan the lights
// we turn them on one at a time. This can make the display very dim if we
// have to turn on all of the lights. However, for something like the Game of
// Life, only a few lights are ever on at the same time. If we only worry 
// about scanning the lights that are on, then our display is much brighter.
// However, it is important that the code that scans the display is very
// fast, so that we can run it thousands of times per second so that there is
// no visible flicker to the human eye. So, we prepare a list of which lights
// are on ahead of time. Then, the display scanning code only has to refer to
// this list, which is rapid.
// We use a double buffering strategy here. The active list of lights that 
// are on is stored in the "lightList" variable. This is the one that the 
// display scanning code actually uses. When we want to prepare a new list,
// we write it into the lightBuff variable, then flip it once it is ready. 
// I am somewhat wasteful here and allocate enough space to store two lists
// that might contain all 256 LEDs. I suspect there are ways to use less 
// space by being clever, but I can't think of anything at the moment. 
uint8_t ll0[NN],ll1[NN];
volatile uint8_t *lightList = &ll0[0];
volatile uint8_t *lightBuff = &ll1[0];
volatile uint16_t lighted = 0;

// This is a simple random number generator. It is not very good, and will
// produce the same sequence of numbers each time the AtMega is turned on,
// but it will suffice for this demonstration.
uint32_t rng = 6719;
uint8_t rintn() {
    rng = ((uint64_t)rng * 279470273UL) % 4294967291UL;
    return rng&0xff;
}

// To simplify changing the pin states of the AtMega, I group the three ports
// PORTB PORTC and PORTD into one logical port and then set all three with
// one function call. 
void setDDR(uint32_t ddr) {
    DDRB = ddr & 0xff;
    ddr >>= 8;
    DDRD = ddr & 0xff;
    ddr >>= 8;
    DDRC = ddr & 0xff;
}
void setPort(uint32_t pins) {
    PORTB = pins & 0xff;
    pins >>= 8;
    PORTD = pins & 0xff;
    pins >>= 8;
    PORTC = pins & 0xff;
}

// Spin around NOP for a while -- a quick and lazy way to control timing
void delay(uint32_t n)  { while (n--) NOP;}

// Read and write functions for the display data. 
// This abstracts away the bit packing and unpacking.
// First argument: one of the display buffers ( either disp or buff )
// Second argument: the pixel index. We use row-major indexing, so 
// for example, the first row will count up indices 0..15, then
// the second row will start at index 16 .. 31 and so on and so forth.
// For the set method, the third argument is 0 or 1 -- 1 for "on" and 0 for
// "off"
uint8_t get(uint16_t *b,uint16_t i) { return (b[i>>4]>>(i&15))&1;}
void    set(uint16_t *b,uint16_t i,uint8_t v) { if (get(b,i)!=v) b[i>>4]^=1<<(i&15);}

// The game of life wraps around at the edges. To abstract away some of the
// difficulty, these previous and next functions will return the previous
// or next row or column, automatically handling wrapping around the edges.
// For example, the next column after column 15 is column 0. The previous
// row to row 0 is row 15.
uint8_t prev(uint8_t x) { return (x>0?x:N)-1; }
uint8_t next(uint8_t x) { return x<N-1?x+1:0; }

// These are parameters to configure the Game of Life. Sometimes the game
// gets stuck. To poke it, I randomly drop in some primitive shapes.
// I have stored these shapes in a bit-packed representation here.
// Also, the TIMEOUT variable tells us how long to let the game stay "stuck"
// before we try to add a new life-form to it. The variable shownFor counts
// how many frames the game has been stuck.
#define glider  0xCE
#define genesis 0x5E
#define bomb    0x5D
#define TIMEOUT 7
uint8_t shownFor;

// These are some macros to make coding the Game of Life a little easier. 
// When we update the game, we are always reading from the current state, 
// which is stored in the "disp" display buffer, and we are always 
// writing the next state into the "buff" display buffer. So, we can 
// simplify the Get and Set functions for readability.
#define getLifeRaster(i)    get(disp,i)
#define setLifeRaster(i,v)  set(buff,i,(v))
// We store the display data in a bit-vector which is row-major ordered. 
// The bit vector is indexed by a single number from 0 to 255, but we want 
// to think about the Game of Life in terms of rows and columns. So, 
// this macro just wraps conversion of row and column numbers into an index
// for readability
#define rc2i(r,c)           ((r)*N+(c))
// Finally, combine the index macro with the get and set macros to 
// create macros for reading and writing game state based on the row and
// column numbers. 
#define getLife(r,c)        getLifeRaster(rc2i(r,c))
#define setLife(r,c,v)      setLifeRaster(rc2i(r,c),v)

// One step in the Game of Life is to count the number of neighbors that 
// are "on". This function computes part of that: it counts the number of 
// neighbors that are "on" at the current position (r,c) and also the
// row above and below. This is not a Macro because the Game of Life 
// implementation here was ported from an implementation with much less
// flash memory. When you make a macro, that code is expanded with simple
// text substitution before it even gets to the compiler. This means that
// each time we "call" a macro a bunch of code gets included. Sometimes this
// can be optimized out, but not always. By making this a function, we help
// the compiler understand that each "call" can be computed using the same
// code, and this resulted in a smaller binary. TLDR: esoteric design choice
// for size optimization, IGNORE.
uint8_t columnCount(uint8_t r, uint8_t c) {
    return getLife(prev(r),c) + getLife(r,c) + getLife(next(r),c);
}

// We store the game state in raster-like buffers of pixel data, but we
// scan the display one light at a time based on a list of which lights are
// on. This function converts from the raster-like representation to the
// list representation. Once we have prepared a new frame of the game, we call
// this function to create a new list of "on" lights for the display scanning
// code. We also flip the pixel and light-list buffers here. 
void flipBuffers() {
    // flip the pixel buffers
    uint16_t *temp = buff;
    buff = disp;
    disp = temp;
    // scan the new pixel buffer for lit pixels and add them to the list
    // of lighted pixels for sparse scanning. Since N=16, the raster index
    // i is already (row<<4)|column, the format the scanning code expects.
    uint16_t i;
    uint16_t lightCount = 0;
    for (i=0;i<NN;i++)
        if (get(disp,i))
            lightBuff[lightCount++] = i;
    //swap the sparse scanning data buffers; shrink lighted first so the
    //interrupt never reads past the still-valid prefix of the old list
    lighted = lighted<lightCount?lighted:lightCount;
    volatile uint8_t *ltemp = lightBuff;
    lightBuff = lightList;
    lightList = ltemp;
    lighted = lightCount;
}

// pin definitions. We have wired up four different 8x8 LED matrices to
// the AtMega. I have kept the anodes and cathodes contiguous, so that
// for each array, the anodes and the cathodes use a sequential run of
// pins. The way I have set up my logical pin numbers, pins 0..7 are PORTB,
// pins 8..15 are PORTD, and the rest are PORTC. These arrays store the
// start of the anode or cathode sequence for each of the 4 matrices.
const uint8_t cmap[4]  = { 21,9, 7,17};
const uint8_t amap[4]  = { 12,1,15, 4};

// I didn't know which anode/cathode was which when I wired up the arrays.
// LED matrix pinouts can be idiosyncratic like that. However, I did wire up
// each of the arrays in a consistent manner. So, I can fix this by applying
// a permutation in software. These arrays store the pin permutations for
// the anodes and cathodes.
const uint8_t aperm[8] = {0,2,3,1,7,4,6,5};
const uint8_t cperm[8] = {4,3,1,5,0,6,7,2};

// This function handles the display scanning -- it is called from a timer
// interrupt. The variable scanI keeps track of which light we're currently
// displaying. The variable pwm keeps track of state so that we can turn some
// lights on longer than others. Each time a timer interrupt occurs, this
// code changes which light is displayed. Lights are only shown one at a time.
// The timer interrupt changes them fast enough that they appear to be all
// illuminated simultaneously.
volatile uint8_t scanI = 0;
volatile uint8_t pwm   = 0;
ISR(TIMER0_COMPA_vect) {
    // PWM would allow us to vary the brightness by leaving some lights
    // on longer. Here, I just use it to correct for a problem with PORTC.
    // For some reason LEDs on PORTC are more dim than they should be. This
    // is probably because I am driving the LEDs without current limiting
    // resistors, and so LEDs on other ports are drawing more current than
    // I expected ( more than 40mA ). Thus, these LEDs are brighter than the
    // ones driven using PORTC. It may have something to do with PORTC also
    // being capable of analog input? Anyway, it's unclear if this is bad but
    // this little PWM script seems to fix it. Also, I intentionally waste
    // CPU cycles using a delay loop so that the PWM branch takes as long
    // as the light-scanning branch -- this is so that the fraction of time
    // taken by the display scanning interrupt routine is roughly constant.
    // This is important because if I returned early from the PWM branch,
    // the extra CPU cycles would be used by the game logic and the game
    // frame rate would vary. A more correct way to do this is to wait after
    // each frame of the game of life has been computed, based on timer
    // interrupts. But just adding the delay here was easier for me.
    if (pwm) {
        // leave the current PORTC light on for one extra interrupt period,
        // burning roughly as many cycles as the scanning branch below
        pwm = 0;
        delay(20);
        return;
    }
    // We advance to the next light. If for some reason all of the lights are
    // off we return immediately. Technically, there are race conditions
    // with the flipBuffers() function, but the errors are not catastrophic.
    // We only focus on lights that are on, and keep track of how many lights
    // are on in the "lighted" variable.
    if (!lighted) return;
    scanI++;
    if (scanI>=lighted) scanI=0;
    // We look up which light should be on, which is stored in the lightList.
    // The lower 4 bits contain the column index, and the upper four bits
    // contain the row index. Our display consists of 4 8x8 arrays. These are
    // laid out with a charlieplexing pattern. So, we do a test to see which
    // array our light at row, column (r,c) should be in -- that is what the
    // variable 'b' stores. Then we can look up which anode and cathode we
    // should use to turn the light on, now that we know which array to use.
    uint8_t light = lightList[scanI];
    uint8_t r = light>>4;
    uint8_t c = light&0x0f;
    uint8_t b = (r<8?0:1)+(c<8?0:2);
    // Looking up which anode and cathode to use seems strange at first.
    // For each array, I've wired up the anodes and cathodes to be contiguous,
    // so that, for example, for the fourth array, the cathodes start at
    // logical pin 17 and proceed toward logical pin 25. Except there are only
    // 22 pins ( numbered 0..21 ), so it wraps around to the beginning again
    // rather than counting up to 24. So, we get the starting pin, count up
    // from there, and use modulo to wrap around to the beginning again.
    // Finally, I did not know which cathode or anode was which when I wired
    // up the arrays -- LED matrix pinouts can be rather idiosyncratic like
    // that. So! The anodes and cathodes were all out of order. So I have to
    // look up a permutation to get the right one for our specific row and
    // column.
    uint8_t an = (amap[b]+aperm[r%8]+21)%22;
    uint8_t ct = (cmap[b]+cperm[c%8]+21)%22;

    // Now that we know which pins to use to turn on the light, we can
    // actually set the pin state to achieve this. First we turn all the
    // pins to high impedance ("off" or "input mode"). This is because, to
    // set the right pin, we have to change pins in PORTB, PORTC, and PORTD.
    // As far as I know, there is no way to change all three of these
    // simultaneously -- you have to do them one at a time. This means that
    // in the process of changing the pin states we might accidentally turn on
    // some lights we didn't mean to. To work around this, first turn off all
    // the pins, then set the pin state, then turn back on only the pins that
    // we need to use.
    setDDR(0);
    setPort(1UL<<an);
    setDDR((1UL<<an)|(1UL<<ct));
    // If we are driving the anodes using PORTC, we use a brightness correction
    if (an>=16) pwm = 1;
}

int main() {
    // I happen to be using the internal RC oscillator, which has an unknown
    // frequency. Setting OSCCAL to 0xff means "run the internal RC oscillator
    // FAST". OSCCAL is supposed to be used for calibrating the RC oscillator
    // so that it runs at a known frequency, but here I just max out the
    // calibration variable. Another way to do this is to change the fuses so
    // that the RC oscillator runs faster -- but this works too.
    OSCCAL = 0xff;

    // I want to configure a timer interrupt to scan the display.
    // I determined these values using the AtMega datasheet, and don't exactly
    // remember what they all mean.
    // I know that we turn on timer interrupt 0 and set the prescaler
    // (in TCCR0B) to something -- probably "as fast as you can".
    // Then, we set it to "compare to counter" mode. This means that the
    // counter will increment, and when it gets to OCR0A ( set here to 180 )
    // it will call our display scanning code. This lets you tweak how often
    // the display is scanned. If it is too slow, you will see flickering. If
    // it is too fast, there will not be enough CPU cycles left over to run
    // the game logic.
    TIMSK0 = 2;  // Timer CompA interrupt 1
    TCCR0B = 2;  // speed
    TCCR0A = 2;  // CTC mode
    OCR0A  = 180;// period
    sei();       // enable interrupts
    // Initialize the game to a random state, and update the lightList to
    // reflect this.
    uint8_t r,c;
    for (r=0;r<16;r++) buff[r] = rintn() | ((uint16_t)rintn()<<8);
    flipBuffers();
    // Run the Game of Life
    // There are some tedious optimizations here. 
    // To compute the next frame in the Game of Life, one has to sum up
    // the number of "live" or "on" neighbors around each cell.
    // To optimize this, we keep a running count. 
    // When we want to count the number of "live" cells at position (r,c),
    // We look at how many cells were alive around position (r,c-1). Then, we 
    // Subtract the cells from column c-2 and add the cells from column c+1.
    // One could also keep a running count along the columns, but this
    // would require more intermediate state and the additional memory lookups
    // consume the benefit in this application. 
    // As we update the game, we also keep track of whether a cell changes.
    // If cells are not changing, we make note of this, and if the game stays
    // stuck for a bit, then we add in new life-forms to keep things
    // interesting. 
    // One neat thing about double-buffering the game state is that the 
    // state from TWO frames ago is available in the output buffer, at least,
    // until we write the next frame over it. So we can take advantage of this
    // and also detect when the game gets stuck in a 2-cycle.
    while (1) {
        uint8_t changed = 0;
        uint8_t k=0;
        for (r=0; r<N; r++) {
            uint8_t previous = columnCount(r,N-1);
            uint8_t current  = columnCount(r,0);
            uint8_t neighbor = previous+current;
            for (c=0; c<N; c++) {
                uint8_t cell = getLife(r,c);
                uint8_t upcoming = columnCount(r,next(c));
                neighbor += upcoming;
                uint8_t new = cell? (neighbor+1>>1)==2:neighbor==3;
                // check for 2-cycles against the two-frame-old data still
                // in buff, before overwriting it with the new cell state
                changed |= new!=cell && new!=get(buff,rc2i(r,c));
                setLife(r,c,new);
                neighbor -= previous;
                previous = current;
                current  = upcoming;
                k += new;
            }
        }
        // occasionally, or whenever the game is stuck, drop in a life-form
        uint8_t l=0;
        if (!(k&&rintn())) l=genesis;
        if (!changed && shownFor++>TIMEOUT) l=genesis;
        if (l) {
            uint8_t r = rintn()&15;
            uint8_t q = rintn()&15;
            uint8_t a = rintn()&1;
            uint8_t b = rintn()&1;
            uint8_t i,j;
            for (i=0;i<3;i++) {
                uint8_t c = q;
                for (j=0;j<3;j++) {
                    // stamp the bit-packed shape, high bit first
                    if (l&0x80) setLife(r,c,1);
                    l <<= 1;
                    c = a?next(c):prev(c);
                }
                r = b?next(r):prev(r);
            }
            shownFor = 0;
        }
        // show the finished frame
        flipBuffers();
    }
}

EMERGE 2013 Retrospective

The past weekend saw EMERGE 2013 at ASU.  The theme was “The Future of Truth”, and there was a level of carnival creativity rarely seen in the rather stolid world of academia; there were dancers, full body 3D scanners, philosophy in public, and similar insanity.

A full accounting of the speeches and events that went on at EMERGE is beyond me, but I’d like to note a few highlights. Brad Allenby remains eminently quotable and provocative, playing clips from 2001: A Space Odyssey and advising us not to make out with strange monoliths. Claire Evans of YACHT gave a dangerously smart talk on how rock and roll is a post-modern cult for the 21st century. In a panel on “The Myth of the Future”, Bruce Sterling declined to found a sci-fi cult (dammit! I’d bring the Kool-Aid), while Betty Sue Flowers discussed global myths and Brian David Johnson mediated between the two, advising us to take control of our own future stories.

My part of EMERGE was a workshop called “Truth and Atrocities: What is the Future of Investigating Human Rights Violation in the Age of Facebook?” which Dan Rothenberg, a legitimate human rights lawyer and expert on truth commissions, was kind enough to let me help out with. Truth commissions are part of what is called “transitional governance”, the process of taking a country from a period of dictatorship or civil war (and associated atrocities) and building civic society and robust democratic institutions. They aren’t war crimes tribunals, as people are rarely charged with a crime; they are instead intended to establish a common factual record of what happened, to give voice to victims, and to allow forgiveness of perpetrators so that the culture can heal and move forward.

The first truth commission was established to deal with the fate of The Disappeared, victims of the Argentine military junta who were abducted, tortured, and finally murdered, with these actions comprehensively disavowed by the State. The Argentinian Commission on the Disappearance of Persons recorded the names of the victims and the locations of secret prisons and graves, and generally made it impossible to ignore the crimes of the old regime.

Dan and I decided to focus our discussion on drones, since they’re a controversial issue which may require a truth commission in the future—as the next generation of policy-makers will have to reconcile the common knowledge of the Drone War with official administration denials that any strikes are taking place and that, in any case, only terrorists are harmed (a patent lie). Our group included the awesome Jasmina Tesanovic, along with a full spectrum of students, professors, and journalists willing to argue for and against drones. On the second day we took up the roles of a Truth Commission investigating a drone strike in 2019, establishing a detailed sequence of personal narratives that looked at this one event from many perspectives.

The participants did an amazing job making the events of that day come alive. From my own perspective, I began to question my technological conservatism on autonomous drones. While current policy requires that a human being pull the trigger, future drones designed to operate in more hostile environments may have more independence in motion, sensor fusion, and analysis. Of course, a human will still have to give the kill order, but the drone might wait several minutes before firing to maximize a hit and minimize collateral damage. Once those capacities are in place, an ‘ethical governor’ that determines that the whole mission is wrong does not seem so unrealistic. A sudden call from our drone, an MQ-47 named “Sparky”, brought the house down.

Otherwise, I had several great conversations with the brilliant Caitlin Burns of Starlight Runner Entertainment. Her company is responsible for Forward Unto Dawn (best military scifi of the past decade), and I am firmly convinced that gaming, literature, film, art, advertising, and maybe even politics are blending into some new thing. Less sure if that’s a good thing, necessarily, but it’ll be interesting.

Of course, no conference on a topic as big as “The Future of Truth” could end with answers, and so I’d like to pose two big questions I’m left with.

Truth Commissions have traditionally been conducted through oral history (speaking has a healing value) and forensic examinations of field sites and archives. Drones and camera phones (even Third World peasants have camera phones now) have introduced a massive proliferation of video into post-2000 investigations. Does the number of cameras in contested zones make atrocities harder to commit and get away with? Conversely, does the potential for omnipresent video footage mean that old-fashioned oral testimony is less credible? Truth commissions operate from a place of empathic distance: how do these technologies make that perspective easier or harder to obtain? Dick Fink threw together a 6-panel video mashup of drone footage and Arab Spring clips (thanks bro!) to help illustrate the panel. Drone footage is mesmerizing in its abstraction—black and white IR dots, and then an explosion, and then some of them stop moving. Conversely, cellphone videos—grainy, jerky, poorly framed as they may be—have an undeniable presence. It is difficult to maintain distance just hearing about massacres. What if you had those atrocities caught on video? Could the past ever fade away?

Second was about the long term purpose of EMERGE. As Bruce Sterling said in his keynote discussing Vaclav Havel, there’s a big difference between having fun and being provocative, and dealing with the administrivia necessary to keep things running. As EMERGE becomes an institution, something that happens more than once or twice, how will it find its purpose beyond being an intellectual festival? How can bringing artists and scientists together for a few days help advance ASU towards the three moonshots of health-span extension, sustainability, and educational transformation (as laid out by President Crow that morning)? I think that a sense of fun, of transdisciplinary public engagement, of an intellectual adventure, can be very beneficial for a scholarly community. I hope that future EMERGEs live up to the high standards of this one.


inthecontinuum on tumblr: many trippy images

... and the author routinely provides Mathematica source for rendering. I recommend it.


DIY TinyMarquee: an Attiny24 based scrolling marquee

Ever looked at a scrolling LED marquee, displaying news headlines and other worldly information, and thought, "I need that"? Well, as a little project we've built one from the ground up, hardware, firmware, software and all. This was conducted as an educational exercise in PCB design, charlieplexing, software serial on AVRs, and web scraping.

I humbly present these notes in hope that they will provide some direction if you embark on a similar project. The source code and design files are up on a git repository. (All of this work is done and tested on Linux.) Edit: I made the python scripts more modular, but they are now interdependent, so if you're poking around with this, grab the entire Git repository to avoid missing functions, &c.

This is hardly the first DIY Marquee project out there in the wild. Here is a large one that appears to be all hand-routed on protoboard.  Here is one made from Christmas lights. Here is a network enabled one. One using ping-pong balls. And here is one implemented in the rear window of a car. To the best of my knowledge, this DIY project is unique in using charlieplexing and software serial to push the capabilities of the AtTiny24.

A terminal stock ticker

The python module ystockquote can pull stock prices and other information from Yahoo finance. With this library, grabbing a stock price is as easy as "get_price(stockname)". So, we have a short python script to download stock prices, format them, and save to a local file. There is also a less well tested news headline scraper.

Bitmap fonts 

We need to translate text into a format that we can send to a scrolling marquee. We send raw pixels to the marquee ( so that we can adjust fonts and graphics without re-loading the firmware ). To accomplish this, we send new columns as 5-bit integers over serial. But before we get to that, we need to make a 5-pixel high font. I used Gimp to draw a font and a short Jython script to convert it into bit-packed integers representing columns of pixels for each letter.*
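The column packing described above is simple bit manipulation. A sketch (the bit order and the example glyph are illustrative, not the actual font from this project):

```c
#include <stdint.h>

/* Pack a 5-pixel-high glyph column into the low 5 bits of a byte.
 * Convention assumed here: bit 0 is the top row of the column. */
static uint8_t pack_column(const uint8_t pixels[5]) {
    uint8_t col = 0;
    for (uint8_t row = 0; row < 5; row++)
        if (pixels[row]) col |= (uint8_t)(1u << row);
    return col;
}
```

A letter such as 'T' might then become the column sequence 0x01, 0x1F, 0x01 (top-row pixel, full column, top-row pixel), each byte sent over serial as one marquee column.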

To test converting text to the bitmap font, the script "scroll" takes the scraped data and scrolls it across a simulated marquee in the terminal ( there is also the short "terminal_marquee" script which scrolls indefinitely, but updates intermittently in the background )


The hardware consists of 90 3mm LEDs arranged in a 5x18 grid. These are driven by 10 IO lines of an AtTiny24. The remaining free IO line is used to poll and listen for serial data. There is also a 10K pull-up resistor on the AtTiny's reset pin, and a 0.1μF decoupling capacitor near the power pins. The surface mount AtTiny bridges one row of the LED pins. This is mildly annoying to solder but not prohibitively difficult if you are already practiced in SMT soldering. That's it! Not much to it. The marquee gets power and data from a USB to TTL serial adapter.


Charlieplexing is a way to drive many LEDs from only a few pins. Since LEDs only light up when current is passed in one direction, you can place two LEDs across every unique pair of IO lines at your disposal. This lets you drive N*(N-1) LEDs from N IO pins.**
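Counting the LEDs is just counting ordered pairs of distinct pins: the anode goes on the pin driven high, the cathode on the pin driven low, and every other pin is left high-impedance. A quick sketch ( pin numbering here is illustrative ):

```python
# Each ordered pair of distinct IO pins addresses exactly one LED:
# drive the first pin high ( anode ), the second low ( cathode ),
# and leave the rest high-impedance.
from itertools import permutations

def charlieplex_map(pins):
    """Return a list of (anode_pin, cathode_pin) pairs, one per LED."""
    return list(permutations(pins, 2))

leds = charlieplex_map(range(10))   # the marquee's 10 IO lines
print(len(leds))                    # -> 90, i.e. 10 * 9, the 5x18 grid
```

Note that (0, 1) and (1, 0) are distinct entries: the two anti-parallel LEDs sharing one pair of lines.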

PCB design

Update : I apologize for the lack of schematic. None exists. The layout was done entirely within Eagle's board editor without a schematic step -- this is all I have.

While it is possible to wire up the grid of LEDs by hand, I would not recommend it. Instead of going through this tedium, you can design a custom board and have it professionally fabricated. I use the free version of Eagle CAD to design and prepare boards for manufacture. A full Eagle tutorial is beyond the scope of this writeup, but numerous tutorials can be found elsewhere online. The Eagle design files for this project can be found here.

Exporting gerber files for board fabrication

Once you have finalized a board, you need to prepare design files for fabrication. PCB designs get exported to so-called "Gerber" files, which are like the PDFs of circuit board design. Once you have these files, you can send them off to a fab house for production. My favorite tutorial for this is on Hackaday.

For one-off boards, BatchPCB is the go-to place. For small runs, consider Advanced Circuits or Seeed Studio's Fusion PCB service. For larger runs ( more than 30 ), depending on board size, Goldphoenix is the place to go. Depending on which service you choose, you should get boards in anywhere from a few days to five weeks. I used Seeed because it is relatively cheap; it took about a month to ship to the US.

Part sourcing 

For cheap LEDs, I use eBay. For all other components ( especially the AVR microcontrollers ), I source from Mouser or Digikey. For a low-cost USB-to-serial adapter, look for "USB To RS232 TTL PL2303HX" on eBay. These are cheaper than, say, an FTDI cable from Sparkfun, and have worked great for me. I'd hoped to save a few bucks by using charlieplexing and the ATtiny24 in the design -- the total cost of each board, shipping and USB-TTL converter included, from a lot of 10, is about $7.50:
10   boards     $30
1000 LEDs       $10
10   ATtiny     $14
50   Resistors  $ 1
10   Capacitors $ 1
10   Headers    $ 1
10   USB-TTL    $18
     Total      $75  ( /10 = $7.50 per board )
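The arithmetic checks out; as a trivial sanity check ( item names here are just labels ):

```python
# Per-board cost check for the parts list above.
costs = {"boards": 30, "LEDs": 10, "ATtiny": 14,
         "resistors": 1, "capacitors": 1,
         "headers": 1, "USB-TTL": 18}
total = sum(costs.values())
print(total, total / 10.0)   # -> 75 7.5, i.e. $7.50 per board
```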

Cleaning the boards 

After you're finished soldering, remove any solder flux adhered to the board. Apart from being unsightly, flux can be conductive and corrosive, damaging the board over time. Hackaday has a good tutorial on this. We filled an old jar with 90% isopropanol, dunked the boards in it, and shook them around for a while -- it worked wonderfully.

Software serial 

The ATtiny24 does not have hardware support for serial ( UART ), so we'll have to implement it in software. For information about the RS-232 communication protocol, see the wiki. I used polling at twice the serial rate ( 4800Hz ) to monitor incoming serial data, which works for receiving five-bit packets. Further details about the firmware can be found in the C source file.
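The 2x-oversampling idea can be illustrated with a host-side Python simulation ( the real implementation is in C on the ATtiny24 ). The frame format assumed here is standard async serial adapted to this project: idle high, one low start bit, then 5 data bits LSB-first; the 5-bit payload matches the column packets, but the exact framing details of the firmware are an assumption.

```python
# Simulate decoding a serial line sampled at twice the bit rate.
# Each bit period spans 2 samples; after spotting the low start bit,
# sample the middle of each data-bit period. Illustrative sketch only.

def decode(samples, nbits=5):
    """samples: list of 0/1 line levels, sampled at 2x the bit rate."""
    out, i = [], 0
    while i < len(samples):
        if samples[i] == 0:                  # low sample: start bit
            value = 0
            for b in range(nbits):
                # skip the 2-sample start bit, then sample mid-bit
                value |= samples[i + 3 + 2 * b] << b
            out.append(value)
            i += 3 + 2 * nbits               # advance into the stop bit
        else:
            i += 1                           # idle line, keep polling
    return out

def encode(value, nbits=5):
    """Test helper: build the 2x-oversampled waveform for one frame."""
    bits = [0] + [(value >> b) & 1 for b in range(nbits)] + [1]
    return [1, 1] + [s for bit in bits for s in (bit, bit)]

print(decode(encode(19)))   # -> [19]
```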

Compiling with avr-gcc 

I never remember the commands to compile and upload firmware, so here are the commands for future reference.

# compile the source to a file a.o, targeting the ATtiny24
avr-gcc -Os -mmcu=attiny24 ./display_serial.c -o a.o

# extract the text and data sections to make a binary for the AVR
avr-objcopy -j .text -j .data -O binary a.o a.bin

# check the size ( this should be smaller than the available flash )
du -b ./a.bin

# upload the binary to ( in this case ) the ATtiny24
avrdude -c avrispmkII -p t24 -B4 -P /dev/ttyUSB1 -U flash:w:a.bin

So, now what?

I'm not sure. I guess these will probably just sit in my desk for a while. The news was getting depressing and I honestly don't care about stock prices, so for now it's just a marquee clock. I'm open to suggestions for cool applications of this hardware.


*If you don't have Jython or a working JVM installed, it may be easier to re-enter the font as text data and write a short conversion routine in Python.

**If you're familiar with multiplexing, there's a simple way to conceptualize board layout for charlieplexing. When you multiplex an NxN grid of LEDs, you use N IO lines to control power to the (-) ends of the LEDs, and N IO lines to control power to the (+) ends. To go from multiplexing to charlieplexing, note that microcontroller pins can take on three states: Low (-), High (+), and Off ( "high impedance" ). Each IO line can therefore serve as both a (+) line and a (-) line. What happens if we use the same N pins to drive both the (-) and the (+) sides of a multiplexed display? Everything works fine, as long as we use the "off" state to stop current. One problem: the LEDs along the diagonal of the matrix have their (+) and (-) driven by the same IO pin -- there is no way to make these light up. But no worries: since we are laying out our own board, we can just delete these LEDs, or connect their (+) terminals to a reserved IO pin to regain control of them! Charlieplexed PCB design can be relatively simple: lay out a grid of LEDs as if you were going to multiplex them, but connect the N anodes to the N cathodes, and either delete the N LEDs that end up connected to the same IO line at both ends, or wire these up to a separate IO pin to finish things off.