variation on immersive video with one projector

Something like this, drop day 2008?

Another variation, which would be more difficult to construct, is a dome. This would require a projector raised and stabilized many feet above the dome, or multiple projectors.


Stolen from here, also this




k: human consciousness is a very complicated thing
k: i can only hope that it's based on a few simple tricks
m: the cortical maps certainly make it seem like its less complicated than previously indicated
m: its pretty frightening actually
m: the experience of information.. being nothing more than the existence of that information
k: yeah
k: it is
m: its a statement of identity
m: that should have been obvious

More Pretty

This version should be available for download from here.

The following have been added:
  • new maps
  • more color modes
  • changed the reflection modes
  • on screen state display
  • ability to define custom maps at runtime (shift + E)
  • ability to save screenshots and settings (shift + S)
  • ability to define presets in the settings file
I suppose I should probably write down what the various keys do... but guessing randomly will be more fun.


Biff and I have been playing with Perceptron....



from z|z| + c


It's a thing... it does stuff... maybe

This probably won't work; if it does, I can't explain how, or what it's supposed to be doing. Most of the rules it generates seem to degenerate into point clouds or rings. I'm sorry, there's nothing much exciting going on here; life's been pretty dull.



It was probably the first time anyone had handed this tattoo artist a copy of The Human Brain Coloring Book as reference material, but this didn't seem to be a career milestone. He liked that it was braaainnnzz, though. So did some of the other artists. They kept stopping in to hold conversations, many consisting (so far as I could tell, with my head mashed into the chair) of "Ooh, it's a braaainnnzz, eh?"

The white things are neurons; the quote is from Hilbert the year before Gödel published his Incompleteness Theorem. (Likewise I have answers to "zomg wedding dress," "zomg AIDS," and "wtf," although the last is a bit involved.) The only unfortunate thing at this point is the total failure of such an endeavor to generate a dinner story. Ah well. Perhaps indirectly.


Conformal maps on photography

I just found a Flickr set with some cool examples of applying conformal maps to photography.


Prototype II

Flame-like fractals, composed mostly of integral powers and assorted trigonometric maps.


The first image from an effort to merge perceptron and electricsheep


Fractal neurofeedback

There's an article on the Mind Hacks blog that overlaps heavily with the kind of stuff we talk about here. Their output looks Sheep-ish; since my realtime Electric Sheep renderer is working now, maybe I'll build an OpenEEG box and bang out an open source alternative.


More screenshots

Here's the latest.

Edit: I've added some more screenshots to the gallery, with tasty vertical symmetry imposed by mirroring.

Here I'm trying out some different maps, and also incorporating a camera feed, which is what gives it the more fuzzy, organic look. The geometric patterns with n-way radial symmetry come from z' = z*c, which gives simple scaling and rotation. The squished circles come from z' = sin(real(p) + t) + i*sin(imag(p)), where p = z^2 + c and t is a real parameter.
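Out of curiosity, these maps are easy to play with outside the renderer. Here's a minimal NumPy sketch (the function names and grid size are mine); in the feedback loop, each pixel z pulls its color from the previous frame at z':

```python
import numpy as np

def rotation_scale(z, c):
    """z' = z*c: multiplying by c scales by |c| and rotates by arg(c),
    which is where the n-way radial symmetry comes from."""
    return z * c

def squished_circles(z, c, t):
    """z' = sin(Re(p) + t) + i*sin(Im(p)), where p = z^2 + c."""
    p = z * z + c
    return np.sin(p.real + t) + 1j * np.sin(p.imag)

# Evaluate on a small complex grid covering [-1, 1]^2.
xs = np.linspace(-1, 1, 5)
z = xs[None, :] + 1j * xs[:, None]
out = squished_circles(z, 0.3 + 0.2j, 0.0)
```

Note that the sin map folds everything into [-1, 1] x [-1, 1], which is presumably why the circles come out squished.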


More fractal video feedback

I've been working on a new implementation of the fractal video feedback idea. Unlike the previous attempts, the code is nice and modular, so complicated bits of OpenGL hackery get encapsulated in an object with a simple interface. It's still very much a work in progress, but I thought I'd share some results now. Feedback (no pun intended) is very much appreciated.


Shoving the video through the YouTubes kills the quality. I have some higher quality screenshots in a Flickr gallery. Some of my favorites:

The basic idea is the same as Perceptron: take the previous frame, map it through some complex function, draw stuff on top, repeat. In this case, the "stuff on top" consists of a colored border around the buffer that changes hue, plus some moving polygons that can be inserted by the user (which aren't used in the video, but are in some of the stills). In these examples, the map is a convex combination of complex functions; in the video it's z' = a*log(z)*c + (1-a)*(z^2 + c). Here z is the point being rendered, z' is the point in the previous frame where we get its color, c is a complex parameter, and a is a real parameter between 0 and 1.
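Here's a CPU sketch of one pass through the loop, with nearest-neighbor lookup standing in for the GPU's texture sampling (the function names, grid extent, and tiny buffer are mine; note that log(z) blows up at the origin, which the clamping papers over):

```python
import numpy as np

def blended_map(z, c, a):
    # z' = a*log(z)*c + (1-a)*(z^2 + c)
    return a * np.log(z) * c + (1 - a) * (z * z + c)

def feedback_step(frame, c, a, extent=2.0):
    """One iteration: each pixel z takes the color of the previous
    frame at z' = f(z); out-of-range points clamp to the border."""
    n = frame.shape[0]
    xs = np.linspace(-extent, extent, n)   # rows = imaginary, cols = real
    z = xs[None, :] + 1j * xs[:, None]
    zp = blended_map(z, c, a)
    ix = np.clip(np.round((zp.real + extent) / (2 * extent) * (n - 1)).astype(int), 0, n - 1)
    iy = np.clip(np.round((zp.imag + extent) / (2 * extent) * (n - 1)).astype(int), 0, n - 1)
    return frame[iy, ix]

frame = np.arange(16, dtype=float).reshape(4, 4)   # stand-in for the 4096x4096 buffer
next_frame = feedback_step(frame, 0.5 + 0j, 0.5)
```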

There are two modes: interactive and animated. In interactive mode, c and a are controlled with a joystick (which makes it feel like a flight simulator on acid). The user can also place control points in this (c,a) space. In animated mode, the parameters move smoothly between these control points along a Catmull-Rom spline, which produces a nice C1 continuous curve.
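The Catmull-Rom segment between two control points is just a cubic in t; here's a sketch, treating each control point as a vector (Re c, Im c, a) (the demo values are arbitrary):

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Point on the Catmull-Rom segment from p1 to p2 at t in [0, 1].
    The curve passes through every control point, and stitching
    consecutive segments together gives C1 continuity."""
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t**2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t**3)

# Four control points in (Re c, Im c, a) space.
p0, p1, p2, p3 = (np.array(p) for p in
                  ([0.0, 0.0, 0.0], [1.0, 0.5, 0.2],
                   [2.0, -0.5, 0.8], [3.0, 0.0, 1.0]))
mid = catmull_rom(p0, p1, p2, p3, 0.5)
```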

The feedback loop is rendered offscreen at 4096x4096 pixels. Since colors are inverted every time through the loop, only every other frame is drawn to the screen, to make it somewhat less seizuretastic. At this resolution, the system has 48MB of state. On my GeForce 8800GTS I can get about 100 FPS in this loop; by a conservative estimate of the operations involved, this is about 60 GFLOPS. I bow before NVIDIA. Now if only I had one of these...


David Darling: Equations of Eternity

Subtitle: "Speculations on Consciousness, Meaning, and the Mathematical Rules that Orchestrate the Cosmos." In my defense for checking it out of the library, the subtitle of Crick's Astonishing Hypothesis is "The Scientific Search for the Soul," and that was the publisher's fault.

It turns out, though, to be just as amusing as the subtitle would suggest. In particular, it contains the first serious exposition of the homunculus fallacy I've ever read:

Not least, the forebrain serves as the brain's "projection room," the place where sensory data is transformed and put on display for internal viewing. In our case, we are (or can be) actually aware of someone sitting in the projection room. But the fish's forebrain is so tiny that it surely possesses no such feeling of inner presence. There is merely the projection room itself, and a most primitive one at that.
This occurs, thankfully, on page 7; and it's a determined reader who's made it through manglings of the Heisenberg uncertainty principle and the word "evolve." But if you can stomach any more of this guy I'd bet the rest of the book is hilarious.

On a side note I don't know if the following argument makes any sense:

Say you have a perfect digital model of a finite universe containing conscious beings. Assume anything that appears random in our world (e.g., the exact positions of subatomic particles) may be modeled as pseudorandom. So we have an extrinsically explicit representation of the world, but not identity; characteristics of one particle are represented by the states and relationships of many other particles. The way the data is organized, from our point of view as programmers, can't possibly make the difference between whether the simulated creatures are actually conscious or not. Either they are, and perhaps we should consider the ethics of writing murder mysteries, or they are not, and there is something very special about the most efficient form of information storage. Or our universe isn't finite and so we don't have to care.


Arduino and ultrasonic rangefinders

If you've been following new media art blogs at all, you've probably heard of Arduino. Basically, it puts together an AVR microcontroller, supporting electronics, USB or serial for programming, and easy access to digital and analog I/O pins. The programming is very simple, using a language almost identical to C, and no tedious initialization boilerplate (compare to the hundreds of lines of assembly necessary to get anything working in EE 51). This seems like a no-hassle way to play with microcontroller programming and interfacing to real-world devices like sensors, motors, etc.

Another cool thing I found is the awkwardly named PING))) Ultrasonic Rangefinder. It's a device that detects the distance to an object up to 3 meters away. A couple of these strategically placed throughout a room, possibly mounted on servos to scan back and forth, could be used for crowd feedback as we've discussed here previously. They're also really easy to interface with.
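For what it's worth, the sensor reports distance as the width of an echo pulse, so the only math involved is speed-of-sound arithmetic; a sketch (the 343 m/s figure assumes roomish temperature):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C

def echo_to_distance(pulse_us):
    """Round-trip echo time in microseconds -> one-way distance in meters."""
    return pulse_us * 1e-6 * SPEED_OF_SOUND / 2.0

# The ~3 m maximum range corresponds to a round trip of about 17.5 ms.
max_range = echo_to_distance(17500)
```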

Update: I thought of a cool project using these components plus an accelerometer, in a flashlight form factor. The accelerometer provides dead-reckoning position; with rangefinding this becomes a coarse-grained 3d scanner, suitable for interpretive capture of large objects such as architectural elements (interpretive, because the path taken by the user sweeping the device over the object becomes part of the input). I may not be conveying what exactly I mean or why this is cool, but this is mostly a note to myself anyway. So there.



Earlier this week, I experienced a religious revelation.

I was reading a truly depressing article about the economy in the NYT and listening to Take a Bow by Muse, and God talked to me. I felt a tingling of strange energy and an urge to stand up. I stretched out my arms, looking up toward some strange, invisible light. I began to tremble, and as the song hit its climax, I fell into my bed and received this message.

"We are fucked. We have put our faith in a system which is smoke and mirrors, and it is falling apart. All debts are being called in, and we cannot cover them. You were too clever by half, and nows its going to fuck you up."

"Switch your portfolio to gold and guns."

Fortunately, I am an atheist, and don't have to listen to God.


Extraction of musical structure

I think my next big project will involve automatically extracting structure from music. Mike and I had some discussions about doing this with machine learning / evolutionary algorithms, which produced some interesting ideas. For now I'm implementing some of the more traditional signal-processing techniques. There's an overview of the literature in this paper.

What I have to show so far is this:

This (ignoring the added colors) is a representation of the autocorrelation of a piece of music ("Starlight" by Muse). Each pixel of distance on either the x or y axis represents one second of time, and the darkness of the pixel at (x,y) is proportional to the difference in average intensity between those two points in time. Thus, light squares on the diagonal represent parts of the song that are homogeneous with respect to energy.
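The plot is cheap to reproduce; here's a NumPy sketch (the frame length, toy signal, and function name are mine), where each entry is the difference in average intensity between two one-second frames:

```python
import numpy as np

def energy_distance_matrix(samples, rate):
    """Chop the signal into one-second frames, take the average
    intensity per frame, then form all pairwise differences. A dark
    pixel at (x, y) means those two seconds differ a lot in energy."""
    n = len(samples) // rate
    frames = samples[:n * rate].reshape(n, rate)
    energy = np.abs(frames).mean(axis=1)   # 1-D average power per second
    return np.abs(energy[:, None] - energy[None, :])

# Toy signal: three quiet "seconds" followed by three loud ones.
sig = np.concatenate([0.1 * np.ones(300), 0.9 * np.ones(300)])
D = energy_distance_matrix(sig, 100)
```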

The colored boxes were added by hand, and represent the musical structure (mostly, which instruments are active). So it's clear that the autocorrelation plot does express structure, although at this crude level it's probably not good enough for extracting this structure automatically. (For some songs, it would be; for example, this algorithm is very good at distinguishing "guitar" from "guitar with screaming" in "Smells Like Teen Spirit" by Nirvana.) An important idea here is that the plot can show not only where the boundaries between musical sections are, but also which sections are similar (see for example the two cyan boxes above).

The next step will be to compare power spectra obtained via FFT, rather than a one-dimensional average power. This should help distinguish sections which have similar energy but use different instruments. The paper referenced above also used global beat detection to lock the analysis frames to beats (and to measures, by assuming 4/4 time). This is fine for DDR music (J-Pop and terrible house remixes of 80's music) but maybe we should be a bit more general. On the other hand, this approach is likely to improve quality when the assumptions of constant meter and tempo are met.
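A sketch of that spectral comparison (the frame length and names are mine): chop the signal into frames, take normalized rfft magnitudes, and compare them pairwise, so sections with equal energy but different instruments come apart:

```python
import numpy as np

def spectral_distance_matrix(samples, rate):
    """Pairwise distances between per-frame power spectra; frames with
    the same energy but different frequency content now differ."""
    n = len(samples) // rate
    frames = samples[:n * rate].reshape(n, rate)
    spectra = np.abs(np.fft.rfft(frames, axis=1))
    spectra /= np.linalg.norm(spectra, axis=1, keepdims=True) + 1e-12
    return np.linalg.norm(spectra[:, None, :] - spectra[None, :, :], axis=2)

# Two sinusoids with identical energy but different pitch.
t = np.arange(100) / 100.0
low = np.sin(2 * np.pi * 5 * t)
high = np.sin(2 * np.pi * 10 * t)
D = spectral_distance_matrix(np.concatenate([low, low, high]), 100)
```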

On the output side, I'm thinking of using this to control the generation of flam3 animations. The effect would basically be Electric Sheep synced up with music of your choice, including smooth transitions between sheep at musical section boundaries. The sheep could be automatically chosen, or selected from the online flock in an interactive editor, which could also provide options to modify the extracted structure (associate/dissociate sections, merge sections, break a section into an integral number of equal parts, etc.) For physical installation, add a beefy compute cluster (for realtime preview), an iPod dock / USB port (so participants can provide their own music), a snazzy touchscreen interface, and a DVD burner to take home your creations.


Simple DIY multitouch interfaces

Multitouch interfaces are surprisingly easy to make. Here's a design using internal reflection of IR LED light in acrylic, and here's an extremely simple and clever design using a plastic bag filled with colored water. Minority Report here we come.

OpenCV : open-source computer vision

OpenCV is an open source library from Intel for computer vision. To quote the page,

"This library is mainly aimed at real time computer vision. Some example areas would be Human-Computer Interaction (HCI); Object Identification, Segmentation and Recognition; Face Recognition; Gesture Recognition; Motion Tracking, Ego Motion, Motion Understanding; Structure From Motion (SFM); and Mobile Robotics."

Sounds like some of this could be pretty useful for interactive video neuro-art, or whatever the hell it is we're doing.


What if everything in the past has been a long string of coincidences? Where we observe and infer the law of gravity, it was just a coincidence that stuff fell down all those times. Balls flying through the air could have turned left, but they always happened to go straight. All natural laws could just be a highly improbable string of events.

In an entirely different but similar fiction:
Our universe appears to be free of contradictions. What if the many-worlds hypothesis were true, and there are often branches where some inherent contradiction occurs? These collapse/explode/disappear as they occur, and by the anthropic principle we only see consistent branches.


Whorld : a free, open-source visualizer for sacred geometry

From the homepage:

"Whorld is a free, open-source visualizer for sacred geometry. It uses math to create a seamless animation of mesmerizing psychedelic images. You can VJ with it, make unique digital artwork with it, or sit back and watch it like a screensaver."


From the artist's page:

Flock is a full evening performance work for saxophone quartet, conceived to directly engage audiences in the composition of music by physically bringing them out of their seats and enfolding them into the creative process. During the performance, the four musicians and up to one hundred audience members move freely around the performance space. A computer vision system determines the locations of the audience members and musicians, and it uses that data to generate performance instructions for the saxophonists, who view them on wireless handheld displays mounted on their instruments. The data is also artistically rendered and projected on multiple video screens to provide a visual experience of the score.

Perhaps you've already seen it, but I really like aleatoric music.


Idea: music visualization with spring networks

The basic idea is to connect a collection of springs into an arbitrary graph, then drive certain points in this graph with the waveform of a piece of music (possibly with some filtering, band separation, etc.). This could be constrained to two dimensions or allowed unrestricted use of three.

Spring constants could be chosen so the springs resonate with tones in the key of the piece. Choosing these constants and the graph connectivity to be aesthetically pleasing would likely be an art form in and of itself. A good starting point would be interconnected concentric polygonal rings of varying stiffness. Symmetry seems like a must.

For a software implementation, a useful starting point would be CS 176 project 5; a cloth simulator that considers only edge terms is essentially a spring-network simulator. There are many ways to render the output; for example, draw nodes with opacity proportional to velocity, and/or draw edges with opacity proportional to stored energy. Use saturated colors on a black background, and render on top of blurred previous frames for a nice trail effect. Since I've already coded the gnarly math once, I might try to throw this together tomorrow evening, if I don't get distracted by something else.
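A minimal sketch of the core update (explicit Euler, unit masses; every name and constant here is mine, not from the CS 176 code):

```python
import numpy as np

def step(pos, vel, edges, k, rest, drive_idx, drive_val, dt=0.01, damping=0.98):
    """One integration step of a 2-D mass-spring network. edges is a
    list of (i, j) node pairs with per-edge stiffness k and rest
    length; the node at drive_idx is pinned to the audio sample."""
    force = np.zeros_like(pos)
    for e, (i, j) in enumerate(edges):
        d = pos[j] - pos[i]
        length = np.linalg.norm(d) + 1e-12
        f = k[e] * (length - rest[e]) * d / length   # Hooke's law along the edge
        force[i] += f
        force[j] -= f
    vel = damping * (vel + dt * force)               # unit masses, mild damping
    pos = pos + dt * vel
    pos[drive_idx] = drive_val                       # driven point follows the waveform
    vel[drive_idx] = 0.0
    return pos, vel

# A stretched spring: the free node gets pulled toward the pinned one.
pos, vel = np.array([[0.0, 0.0], [2.0, 0.0]]), np.zeros((2, 2))
pos, vel = step(pos, vel, [(0, 1)], [1.0], [1.0], 0, np.array([0.0, 0.0]))
```

Rendering nodes with opacity proportional to speed, and edges by stored energy, is then just a drawing pass over pos and vel.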

The variations are really endless. For example, with gravity and stiff (or entirely rigid) edges, you could make a chaos pendulum. By allowing edges to dynamically break and form based on proximity and/or energy, you could get all kinds of dynamic clustering behavior, which might look like molecules forming or something.

A hardware implementation (i.e., actual springs) would be badass in the extreme, although I imagine it would be finicky to set up and tune.

Idea: immersive video with one projector

This is an idea I had while lying in bed listening to Radiohead and hallucinating. (I was perfectly sober, I swear. The Bends is just that damn good.)

Build a frame structure (out of PVC or similar) with the approximate width/depth of a bed, and height of a few feet -- enough that you could comfortably lie on a mattress inside and not feel claustrophobic. Cover every side with white sheets, drawn taut. Mount a widescreen projector directly above the middle of this structure, pointing down. Then hang two mirrors such that the left third of the image is reflected 90 degrees to the left and the right third is reflected 90 degrees to the right (from the projector's orientation), with the middle third projecting directly onto the top of the frame. Then use more mirrors to get the left and right images onto the corresponding sides of the frame. (You'd probably also need some lenses to make everything focus at the same time; this is the only part I'm really iffy on. Fresnel lenses would probably be a good choice. Anyone who knows optics and has any idea how to set this up, please let me know.)

Anyway, the beauty of this setup is that it allows one to control nearly the whole visual field with a single projector and a single video output, thus minimizing complexity and expense. It's not hard to set up OpenGL to render three separate images to three sections of the screen; they could be different viewpoints in the same 3D scene, although as usual I'm more interested in the more abstract uses of this. In particular, you get control over both central and peripheral vision, which has psychovisual importance.

I'm really tempted to build this when I get back to Tech, but there's a high probability that someone else's expensive DLP projector will suffer an untimely demise at the hands of improvised mounting equipment.

Edit: I thought of an even simpler setup that does away with the mirrors and lenses. Make the enclosure a half-cylinder, and project a single widescreen image onto it (orienting left-right with head-feet), correcting for cylindrical distortion in software. The major obstacle here is making a uniformly cylindrical projection surface, but that shouldn't be too hard.
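The software correction is just a per-column remap. Here's a sketch assuming an orthographic projector pointing straight down (a real projector's perspective spread would need a further correction): a vertical ray at normalized offset x hits the half-cylinder at angle asin(x), i.e. at an arc position proportional to asin(x), so each output column samples the source at that arc coordinate:

```python
import numpy as np

def cylindrical_prewarp(src):
    """Pre-distort a frame so that, projected straight down onto a
    half-cylinder, the image appears undistorted on the surface."""
    w = src.shape[1]
    x = np.linspace(-1, 1, w) * (1 - 1e-9)   # normalized offset from the cylinder axis
    arc = np.arcsin(x) / (np.pi / 2)         # normalized arc position, also in [-1, 1]
    cols = np.round((arc + 1) / 2 * (w - 1)).astype(int)
    return src[:, cols]

frame = np.arange(20).reshape(4, 5)
warped = cylindrical_prewarp(frame)
```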


This did not work

Still, the images look pretty. Now I'm going to sleep.