You weary giants of Roombas and broomsticks

Today, I was reading "I Am a Strange Loop", and while immersed in a story about virtual presence, I realized that it would be really cool if you could build a telepresence robot out of, say, a Roomba, a broomstick, and a MacBook.

Some clever guys obviously beat me to it. Check out these. For those who don't want to bother clicking the link: it's a telepresence robot that looks like... well, not all that different from a MacBook on a broomstick on a Roomba. Note that they plan to sell these at "between $1800 and $3000" in "2008"; by my estimate, the lower end of that might be less than the cost of the parts you'd need to make one.

And here's another fun question: what is the interaction of telepresence and immigration laws? If I live in one country, but my job is "in" another, where am I legally employed? This question first became very real to me when I lived in the US as a student and wondered about the legal consequences if I, through the primitive telepresence technologies of e-mail, telephone and ssh, were to do a little freelance work for a Dutch company in Holland, paid in euros into my Dutch bank account, with the Dutch company possibly never even realizing I was in California. (I never did it, but an American I know here does do the exact opposite.)

And why shouldn't he? Given ever-improving telepresence technologies (I am really starting to like that buzzword, even though it's probably already gone out of style), immigration laws start to seem not only backwards and selfish, but positively hilarious. And that last part is a good thing, because in the end, the only way you can really fight The Man is to poke fun at Him...

I have more to say about the political implications of this, but meanwhile, you can ponder whether John Perry Barlow was onto something when he wrote this (emphasis mine):

Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.

We have no elected government, nor are we likely to have one, so I address you with no greater authority than that with which liberty itself always speaks. I declare the global social space we are building to be naturally independent of the tyrannies you seek to impose on us. You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear.

Governments derive their just powers from the consent of the governed. You have neither solicited nor received ours. We did not invite you. You do not know us, nor do you know our world. Cyberspace does not lie within your borders. Do not think that you can build it, as though it were a public construction project. You cannot. It is an act of nature and it grows itself through our collective actions.


RepRap : a self-replicating rapid prototyper

RepRap is a project to make a rapid prototyping machine (aka 3D printer) which can build most of its own parts, with a total cost of under $500. There are already several working prototypes, and they "hope to announce self-replication in 2008".

"RepRap etiquette asks that you use your machine to make the parts for at least two more... for other people at cost."

If this achieves the exponential growth that they're obviously aiming for, it will enable open source distributed development of physical objects (including of course itself), which would be nothing short of revolutionary.

And their canonical test object is a shotglass.



An interesting short story by futurist Marshall Brain. It has implausibilities, it's really preachy in places, and I dislike most futurists as a rule, but I thought it was thought-provoking enough to be worth reading. Opinions are welcome.


Fun with stem cells

More on crowd feedback

Everyone has a cellphone now, right? With a few highly directional antennae you might be able to use the amount of RF activity in a few cellphone bands as an approximation of crowd activity. You could also look for Bluetooth phones, and perhaps remember individual Bluetooth IDs, though I'm not sure most phones respond to Bluetooth probes in normal operation.
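As a rough sketch of the aggregate approach -- hand-waving away the actual radio part and assuming some scanner is already logging `(timestamp, zone, device_id)` sightings (all names here are made up) -- counting distinct devices per antenna zone over a trailing window gives a crude crowd estimate:

```python
from collections import defaultdict

def crowd_activity(sightings, window=60):
    """Estimate per-zone crowd activity from (timestamp, zone, device_id)
    sightings, e.g. Bluetooth addresses seen by a directional antenna.
    Counts distinct devices per zone within the trailing window."""
    if not sightings:
        return {}
    latest = max(t for t, _, _ in sightings)
    seen = defaultdict(set)
    for t, zone, dev in sightings:
        if latest - t <= window:
            seen[zone].add(dev)  # a repeat sighting of one device counts once
    return {zone: len(devs) for zone, devs in seen.items()}

sightings = [
    (0, "north", "aa:01"), (10, "north", "aa:02"),
    (55, "south", "aa:03"), (58, "north", "aa:01"),  # repeat of aa:01
]
print(crowd_activity(sightings))  # {'north': 2, 'south': 1}
```

Counting distinct IDs rather than raw packets keeps one chatty phone from looking like a crowd.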

Another approach would be suitable for a conference or other event where participants have badges. Simply put an RFID tag in each badge and have a short-range transceiver near each display. Now the art not only responds to aggregate preferences, but it also personalizes itself for each viewer. Effects which have previously held a participant's attention will appear more often when that participant is nearby. This will probably result in overall better evolutionary art -- instead of trying to impress the whole crowd, which is noisy and fickle, the algorithm tries to impress every individual person. While it's impressing one group, other people may be attracted in, and this constitutes more upvotes for that gene.

I think one important feature for this to work effectively is a degree of temporal coherence for a given display. If they're each showing short, unrelated clips (like Electric Sheep does), then people will not move around fast enough for the votes to be meaningful. Rather, each display should slowly meander through the parameter space, displaying similar types of fractal for periods on the order of 10 minutes (though of course there may be rapidly-varying animation parameters as well; these would not be parameters in the GA, though their rates of change, functions used, etc. might be).
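The voting scheme above can be sketched in a few lines (function and genome names are mine, purely illustrative): each tick, viewers lingering near a display count as upvotes for the genome that display is currently showing, and selection keeps the genomes with the most accumulated upvotes.

```python
def update_fitness(fitness, genome_on_display, viewers_near):
    """Each viewer lingering near a display this tick counts as an
    upvote for the genome that display is currently showing."""
    for display, genome in genome_on_display.items():
        fitness[genome] = fitness.get(genome, 0) + viewers_near.get(display, 0)
    return fitness

def select(fitness, k=2):
    """Keep the k genomes with the most accumulated upvotes."""
    return sorted(fitness, key=fitness.get, reverse=True)[:k]

fitness = {}
genome_on_display = {"screen_a": "gene_1", "screen_b": "gene_2", "screen_c": "gene_3"}
# two ticks of (hypothetical) viewer counts per display
for viewers in [{"screen_a": 3, "screen_b": 1},
                {"screen_a": 2, "screen_c": 1}]:
    update_fitness(fitness, genome_on_display, viewers)
print(select(fitness))  # ['gene_1', 'gene_2'] -- gene_1 drew the most viewers
```

The RFID version would just replace the viewer counts with per-badge sightings, weighted toward badges the display has impressed before.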


Idea : fractally compressed AR

This is an augmented reality idea I had while walking around looking at trees after Drop Day. Basically, one would wear a VR headset that displays imagery from the outside world, except that occurrences of similar visual objects get replaced with the exact same object, or the same object perturbed in some synthetic way.

So, for example, the leaves of a tree would get replaced with fractals that are generated to look like leaves. As another example, areas of the same "texture" could be identified (basically, areas with little low-frequency spatial component, possibly after a heuristically determined perspective correction). Then a random small exemplar patch is selected and used to fill the entire area with Wei & Levoy / Ashikhmin-style synthetic textures.

The point of all of this is that you're essentially applying lossy compression (by identifying similar regions and discarding the differences between them), then decompressing and feeding the information into the brain (and thus mind). Working on the assumption that consciousness essentially involves a form of lossy compression which selects salient features and attenuates others, you can determine the degree and nature of this compression by determining when a similar, externally applied compression becomes noticeable or incapacitating.

My guess is that there will be a wide range of compression levels where reality is still manageable and comprehensible but develops a highly surreal character. Of course to experiment meaningfully you'd need a good enough AR setup that the hardware itself doesn't introduce too much distortion, although you could also control for this by having people use the system without software distortions.

The McCollough effect: a high-level optical illusion

See here for a demonstration, if you're not familiar. It seems like an afterimage effect at first, but can last for weeks, apparently affects direction-dependent edge detection in V1, and correlates with extroversion. Weird, eh?

BLIT : a short story

BLIT: a short story by David Langford

Terrifyingly relevant to what Mike and I are working on.

"2-6. This first example of the Berryman Logical Image Technique (hence the usual acronym BLIT) evolved from AI work at the Cambridge IV supercomputer facility, now discontinued. V.Berryman and C.M.Turner [3] hypothesized that pattern-recognition programs of sufficient complexity might be vulnerable to "Gödelian shock input" in the form of data incompatible with internal representation. Berryman went further and suggested that the existence of such a potential input was a logical necessity ...

... independently discovered by at least two late amateurs of computer graphics. The "Fractal Star" is generated by a relatively simple iterative procedure which determines whether any point in two-dimensional space (the complex field) does or does not belong to its domain. This algorithm is now classified."

What do you think the odds are that we make something like this?

Idea : VR for consciousness hacking

Ooh, interpolating tessellations is an awesome idea. You'd basically have to interpolate under a constraint, that some parts of the spline line up with other parts. But since this constraint is satisfied at all reference points, I think it would be doable.

I've been thinking lately about virtual reality as a tool for consciousness hacking. VR as played out in the mid-90's was mostly about representing realistic scenes poorly and at great expense. But I think we can do a lot with abstract (possibly fractal-based) virtual spaces, and the hardware is much better and cheaper now. The kit I'm imagining consists of:

  • 3D stereoscopic head-mounted display with 6DOF motion tracker (like this)
  • High-quality circumaural headphones (like these)
  • Homemade EEG (like this)
  • Possibly other biofeedback devices (ECG, skin resistance, etc.)
  • Intuitive controllers (e.g. data glove like this, camera + glowing disks for whole-body motion-tracking, etc.)
  • A nice beefy laptop with a good graphics card
  • Appropriate choice of alphabet soup and related delivery mechanism, if desired
  • A locking aluminum equipment case with neat foam cutouts for all of the above
With the right software this can obviously do a great many things. For example, I've found that after experimenting with a graphics effect for a while, I develop the ability to hallucinate the same effect. With more control over the training period it might be possible to train more complicated effects, determine how much computation versus playback of prerecorded samples is going on at "runtime", and determine on what level(s) of abstraction the hallucinated data manifests. Of course, for actual scientific results we'd need to duplicate the experiments over many people, but personally I'm more interested in hacks that give me greater access to and understanding of my own mind.


Idea : crowd feedback

This is another idea relating to video feedback systems. Imagine an exhibition of a system like Perceptron on several monitors throughout a gallery space. A set of cameras watches the crowd from above, and uses simple frame differencing and motion detection algorithms to determine a map of activity level across the space. This then feeds into the video system; perhaps each region of the space is associated with some IFS function, seed point, or color, and the activity level in that region determines how prominently that feature affects the images.
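The frame-differencing part is simple enough to sketch (the grid size and threshold below are arbitrary placeholders): split each camera frame into grid cells and score a cell by the fraction of pixels whose brightness changed between frames.

```python
import numpy as np

def activity_map(prev_frame, frame, grid=(2, 2), thresh=10):
    """Split a grayscale camera image into grid cells and score each cell
    by the fraction of pixels whose brightness changed by more than
    `thresh` between frames -- a crude motion map to drive the visuals."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int)) > thresh
    gh, gw = grid
    h, w = diff.shape
    # crop to a multiple of the grid, then reduce each block to its mean
    cells = diff[:h - h % gh, :w - w % gw].reshape(gh, h // gh, gw, w // gw)
    return cells.mean(axis=(1, 3))

prev = np.zeros((4, 4), dtype=np.uint8)
cur = prev.copy()
cur[0, 0] = 200  # movement in the top-left cell only
print(activity_map(prev, cur))  # 0.25 in the top-left cell, 0 elsewhere
```

Each cell's score would then drive the prominence of the IFS function, seed point, or color assigned to that region of the floor.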

Each monitor can display a different view of the same overall parameter space, so at any given time there will be some "interesting" and some "boring" monitors. Viewers are naturally drawn towards more "interesting" (by their own aesthetic senses) monitors, and in moving to get a better look they affect the whole system. In essence, the aesthetic preferences of the viewers (now participants) become another layer of feedback.

If Hofstadter is right, and "strange loops" give rise to conscious souls, then should the participants in such an exhibition be considered as having (slightly) interconnected souls? If so, how does the effect compare in magnitude to the interconnectedness we all share through the massive feedback loop of the everyday world? Does this effect extend to the artist who makes the system and then sits back and watches passively? What about the computer that's making it all happen? Is any of this actually "strange" enough to be considered a strange loop? All of these questions seem fantastically hard to answer, but it's a lot of fun to think about.

Idea : the laser glove

This is an idea I had for a simple, dirt-cheap glove-based input device, for intuitive control of audio/video synthesizers like Perceptron and such. It consists of a glove with a commodity red laser pointer oriented along each finger. This allows the user to control the shape, size, and orientation of a cluster of laser dots on a screen. A webcam watches the screen and uses the dots to control software.

The software could do any of a number of things. One approach would be to fit the dot positions to a set of splines, then use properties of these splines such as length, average direction, curvature, etc. as input parameters to the synthesizer system. At Drop Day we had a lot of fun pointing lasers at feedback loops, and that was without any higher-level processing.
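A sketch of that feature-extraction step, assuming the webcam stage has already reduced the frame to dot centroids (say, by thresholding the red channel); `dot_features` and the specific features are my own invented placeholders:

```python
import numpy as np

def dot_features(points):
    """Summarize a cluster of laser-dot positions (x, y) into a few
    synthesizer control parameters: centroid, spread, and the dominant
    direction of the formation (leading principal-component angle)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    centered = pts - centroid
    spread = float(np.sqrt((centered ** 2).sum(axis=1).mean()))
    # dominant direction from the covariance matrix's leading eigenvector
    cov = centered.T @ centered / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)
    vx, vy = eigvecs[:, np.argmax(eigvals)]
    angle = float(np.arctan2(vy, vx))  # sign is ambiguous: 0 or pi for a horizontal row
    return centroid, spread, angle

# five dots roughly along a horizontal line, like fingers spread out
dots = [(10, 50), (20, 51), (30, 49), (40, 50), (50, 50)]
centroid, spread, angle = dot_features(dots)
print(centroid, round(spread, 1), round(angle, 2))
```

Spreading or closing the fingers changes `spread`, rotating the hand changes `angle`, and waving moves the centroid -- three continuous knobs from one webcam, no spline fitting required yet.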

Laser pointers are now less than $1.00 each, bought in bulk on eBay. (I have several dozen in my room right now.) I don't know of a good way to differentiate one dot or user from another without adding a lot of complexity, but I think cool effects could be had by just heuristically detecting dot formations. The emergent behavior from multiple interacting users might be desired, anyway.

On a slightly related note, here's some guy's MIT Media Lab thesis that I found while browsing the Web one fine day: Large Group Musical Interaction using Disposable Wireless Motion Sensors


I'm on the Interblag!

I finally got around to writing something for this blog. First, I have a quotation from our beloved Admiral:

"Do you want this fire extinguisher? No? How about this tag that says 'do not remove under penalty of fire marshal'?"

Second, the Perceptron screenshots on here look really damn good, and I decided I should post some of what I've been working on lately as well. This started yesterday as a CS 101 assignment and kind of got out of control.

There's also a movie here of me playing with it. It's 56M and still looks shitty due to compression, but you can at least get the idea. May cause seizures in a small percentage of the population.

Update: The video is (maybe) on YouTube now:


Jesus Died For Somebody's Sins, But Not Mine

In my college tour of doom, the first school I had the joy of visiting was Hampshire. My good friend Dusty goes there, and she told me to get in early, because Sunday was their annual Easter Keg Hunt, a magical time where students from all over the 5 colleges come to Hampshire to get their drunk on.

I arrived at 1:45, called my contact Dusty, and she immediately handed me a mug full of delicious high-quality beer that tasted of blueberries and love. We went back to her place, where they had a keg of the stuff in the shower. Several glasses later, I found myself in another apartment with a massive stoner circle of hippies, passing around Isaac Haze, one of the largest, most powerful bongs I have ever experienced. I ran into a guy I'll call Dave, a Physics/Math double major who "took a lot of acid, and realized that reality was an illusion, and that only through insanity could we find the truth." Generally, one of the crazier people I've met.

Shortly thereafter, a bunch of us went for a walk in the woods. We had only gotten 20 feet in when we found another keg. After more drinking and wandering, I got cold and headed back to Dusty's place. Inside were a bunch of UMass students and some hot Hampshire artists, who were passing around these truly epic blunts and playing Mario Kart 64. After 3 of those deadly fatties, I felt the tiredness coming on, and I passed out on Dusty's floor.

All in all, a truly epic day.