Showing posts with label media art. Show all posts

20101005

BIG UPS!

above (left to right)- cover, illustrations by chris mucci, sal farina

Hello, We Alone! Pardon the intrusion; I got in when my friend Michael left the back door open. I'm currently working on a comic book compiling diverse artworks by friends (including some stills from Michael's Perceptron!). Since the back cover of the book is a collaboration between We Alone and me, Michael and Biff thought it deserved a bit of an explanation here on We Alone. In terms of political discourse, the following write-up could just possibly be of interest, depending on whether or not you stop staring at Glenn Beck.



fig. A - Big Ups #1 back cover - "Our Hero" (stop staring.)

"Pop culture used to be like LSD – different, eye-opening and reasonably dangerous. It’s now like crack – isolating, wasteful and with no redeeming qualities whatsoever" --Peter Saville

Big Ups is a way of pushing the context of comics. At first it's a simple gesture: email your friends, get some really great art together, put it in a handmade book, release a CD with it, and get it out there one way or another. On the other hand, the simple, do-it-yourself nature of it all is more political, I think, than saying outright, "this is a political gesture". The diversity of the art married to the simple fact that "comics are what make you smile" - what does that mean for kids who want to see something different? For us, I think the unexplainable is way more fun than a one-liner telling you where to go.

This is where we turn to the back cover, featuring a stylish headshot of our hero, Glenn Beck (see fig. A). It's a simple experiment involving an obviously bad political influence (in the sense that he's simply obtuse), transformed into a mandala-like shrine to obfuscation of priority. That's why it's going on the back cover, unexplained, utterly beautiful and strange - it represents a 'beyond' of political discourse. The image itself is simply entertaining, but also confusing as hell. The act of fucking with an image of Glenn Beck (or Bill O'Reilly, or Jon Stewart, for that matter) is like burning an effigy that looks nothing like the subject. Plenty of people want to lay into Our Hero for being an idiot, but personally I don't have time to even concern myself with the nonsense that leaves his mouth. In other words, the mere act of transforming a likeness of Glenn Beck into a psychedelic icon/monster is an exercise in political discourse that aims to make political discourse obsolete. It represents the purpose of Big Ups, or of anything anyone makes themselves - it demonstrates that we're doing just fine with or without Our Hero.


Peter Saville's apt quip reverse-engineers the idea nicely: essentially we are taking crack-like pop culture and turning it into a moderately valuable form of cultural LSD, in that it is different, it is confusing, and it is associated with the practice of creating situations for yourself again, rather than having them mindlessly sold to you.
fig. B - Mr. Peanut/Mr. Penis. 1969 ZAP! #4 cover, by Moscoso.

fig. C - Dyke Pirates Rescue Their Captain from the Diabolic Doctors of Dover. Crank Collingwood, ZAP! 1975.

fig. D - MAROOUFAOLLOU! Hank “Elephant Boy” Longcrank. ZAP! 1975.




Underground comics in the sixties and seventies did this all the time. ZAP!, for example (see fig. B), was famous for it, making drug references seem as common as breathing, featuring horrifyingly (and entertainingly!) detailed graphic scenes of a sexual and/or violent nature (fig. C), and running art that was too confusing, gruesome, and difficult to be considered acceptable by the mainstream (fig. D). It was art that sought beauty where others saw filth. It was art for people who got it - anyone who didn't was encouraged to expand their consciousness until they did, or get the fuck out. MAD magazine and National Lampoon were also famous for doing stuff like that, but they also took the more hard-hitting, easy-to-understand route, making direct parodic shots at political figures, pop culture icons, events, etc. that the mainstream could latch onto more easily. I'm partial to stuff like ZAP! for the very reason that it made no sense whatsoever. It reminds me that life is a little more complicated than following people like Glenn Beck on their journey toward ultimate stupidity.
(note - please see fig. E for the Big Ups back cover's direct inspiration)  
                                                       
fig. E - Backcover Comix, ZAP! 1975


20100228

Drop-Day 2010 Tech

I figured I'd start writing this up on the return flight from Drop Day, so I'm typing at an odd, cramped angle on my flight back from LA to Pittsburgh.
Drop-Day was a fine production, a victory both for hobbyist physical computing and for the forces of democratic freedom. I was impressed with the stark giant white cube dance floor with the, as Biff describes it, Lovecraftian monolith as a centerpiece. Something about the smaller size and the fog machine made people actually want to dance this year. The sensory rooms were excellent, although some of the code running the party never got off the ground. There is something uniquely appealing about a party that crashes and requires rewriting and recompiling code on the fly. In addition to projected visuals (Kanada, Perceptron, Cortex, live and recorded video feeds, and other trippy renderings), we had a few alumni-constructed blinkylights. Keegan completed a most excellent glowing octahedron, Suresh completed a rather nice modification of a commercial lamp, and I constructed several more pairs of goggles. I spent most of this weekend traveling (and very little of it sleeping), and it was well worth it. However, I doubt I'll be traveling back any time in the next five years. Others travel from much further away (London, Fairbanks) to attend, which should give you an idea of how important this party is to Dabney alumni.
RGB-controlled diffuse illumination lamp:
Suresh successfully modified a modern-style diffuse-illumination lamp for controllable RGB color. He even designed and ordered a custom multi-layer board for it. I will try to track him down and see if designs and photographs are available anywhere.
CCFL octahedron:
This project was a wire-frame octahedron, approximately two feet on each edge. An octahedron can be viewed as 3 intersecting squares, one for each of the x, y, and z axes. In this design, each axis was assigned a specific color. The octahedron was constructed using two standard cold-cathode fluorescent lighting tubes per edge, driven by black-box driving hardware powered by 12V DC. The 12V is switched to the various edge drivers using Darlington arrays controlled by an AtTiny2123(?), with the 12V pulled from a modified desktop computer power supply. The skeleton of the octahedron was built by cutting wooden dowels to size, drilling a hole through each end, and joining the ends with zip ties. The lights and driving hardware were also secured to the skeleton with zip ties. A great effect of the hue rotation over tie-dye-style patterns is that the edges appear to shift position as the changing color alternately illuminates different parts of the pattern.
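The three-squares decomposition is easy to check in code. Here's a quick sketch (Python, purely illustrative; the real thing is dowels, zip ties, and an AVR, not any of this):

```python
from itertools import combinations

# Vertices of a unit octahedron: one +/- pair on each coordinate axis.
verts = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def axis(v):
    """Index of the single nonzero coordinate of a vertex."""
    return next(i for i, x in enumerate(v) if x != 0)

# Two vertices share an edge iff they sit on different axes
# (same-axis pairs are antipodal, not adjacent).
edges = [(a, b) for a, b in combinations(verts, 2) if axis(a) != axis(b)]

# Each edge lies in exactly one coordinate plane: the coordinate that is
# zero at both endpoints. Grouping by that plane yields the three
# intersecting squares, one per axis and hence one per color.
squares = {i: [] for i in range(3)}
for a, b in edges:
    plane = next(i for i in range(3) if a[i] == 0 and b[i] == 0)
    squares[plane].append((a, b))
```

Each group comes out with exactly four edges, so the twelve CCFL edges split cleanly into one square per axis/color.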
Revised goggles:
The goggles you see in these photographs still use the same old LEDs-in-ping-pong-balls design, stripped down and controlled by an AtTiny13a. I would not recommend this design: technically the chip is unable to source more than 60mA, while the goggles may require up to 120mA. Offhand, the AtMega(4,8,16)8 chips are the only ones I can think of that can source sufficient current, and since they can hold more elaborate programs they might be a better choice for future designs. Additionally, although the AVR microcontrollers can function across a range of voltages, the nonlinear V-I curve of the LEDs means that any attempt to balance the white point using resistors must be done in the context of a well-defined voltage (preferably a constant 20mA current source, but that takes up board space). I was also surprised that the internal resistance of CR2032 coin cells limits them to approximately 0.3mA continuous draw. Although the much higher mean current draw of ~20mA for the goggles can be supported, this causes the battery voltage to drop during operation and the LED white point to drift. Eventually the voltage falls below the operating voltage of the AtTiny. The coin cells recover after about 30 minutes of rest. We're still seeing some problems with party-durability, but hopefully refining the PCB design and construction can improve on this. Building the goggles is incredibly annoying, and I doubt I shall be constructing any more by the old methods for some time. I'm still a bit baffled as to how someone magically managed to repair solder connections and rebuild the connector on one of the goggles in the middle of the party, but... that's Dabney House for you.
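The white-point drift is just Ohm's law at work. A rough sketch (Python; the forward voltages and supply levels here are assumed datasheet-typical values, not measurements from the actual goggles):

```python
# Assumed typical LED forward voltages (ballpark values, not measured):
VF = {"red": 2.0, "green": 3.2, "blue": 3.2}
TARGET_A = 0.020  # aim for 20 mA per channel

def series_resistor(vcc, vf, i):
    """R = (Vcc - Vf) / I: resistor that sets the target current at Vcc."""
    return (vcc - vf) / i

def led_current(vcc, vf, r):
    """Current actually flowing once the supply has sagged to vcc."""
    return max(0.0, (vcc - vf) / r)

# Balance the white point at a well-defined 5.0 V supply...
R = {color: series_resistor(5.0, vf, TARGET_A) for color, vf in VF.items()}

# ...then watch a sagging battery skew the R:G:B current ratio:
for vcc in (5.0, 4.5, 4.0):
    i = {c: led_current(vcc, VF[c], R[c]) * 1000 for c in VF}
    print(f"{vcc:.1f} V -> red {i['red']:.1f} mA, blue {i['blue']:.1f} mA")
```

Because the red LED has the most voltage headroom, its current falls proportionally less than the blue channel's as the supply drops, so the mix shifts red. A constant-current source sidesteps this entirely, at the cost of board space.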
Laser Spirographs and Monolith-Monitor tower with EL wire:
I don't have good documentation on this at the moment, other than that this system crashed a lot during the party but was still super awesome.
Thanks to everyone who made this happen, it was great to see you all again.
 
 


20080326

I should be doing work

Two days behind and I'm just pointing the web-cam back at the computer screen.


20070903

More screenshots

Here's the latest.

Edit: I've added some more screenshots to the gallery, with tasty vertical symmetry imposed by mirroring.


Here I'm trying out some different maps, and also incorporating a camera feed, which is what gives it the fuzzier, more organic look. The geometric patterns with n-way radial symmetry come from z' = z*c, which gives simple scaling and rotation. The squished circles come from z' = sin(real(p) + t) + i*sin(imag(p)), where p = z^2 + c and t is a real parameter.
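In code, the two maps above look something like this (Python sketch for clarity; the actual implementation runs on the GPU):

```python
import math

def rotate_scale(z, c):
    # z' = z * c: multiplying by a complex constant scales by |c| and
    # rotates by arg(c), which iterates into n-way radial symmetry.
    return z * c

def squished_circles(z, c, t):
    # z' = sin(Re(p) + t) + i*sin(Im(p)), with p = z^2 + c.
    p = z * z + c
    return complex(math.sin(p.real + t), math.sin(p.imag))
```

The sines clamp both components to [-1, 1], which is what squishes everything into those flattened circles.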


20070901

More fractal video feedback

I've been working on a new implementation of the fractal video feedback idea. Unlike the previous attempts, the code is nice and modular, so complicated bits of OpenGL hackery get encapsulated in an object with a simple interface. It's still very much a work in progress, but I thought I'd share some results now. Feedback (no pun intended) is very much appreciated.

Video:

Shoving the video through the YouTubes kills the quality. I have some higher quality screenshots in a Flickr gallery. Some of my favorites:




The basic idea is the same as Perceptron: take the previous frame, map it through some complex function, draw stuff on top, repeat. In this case, the "stuff on top" consists of a colored border around the buffer that changes hue, plus some moving polygons that can be inserted by the user (which aren't used in the video, but are in some of the stills). In these examples, the map is a convex combination of complex functions; in the video it's z' = a*log(z)*c + (1-a)*(z^2 + c). Here z is the point being rendered, z' is the point in the previous frame where we get its color, c is a complex parameter, and a is a real parameter between 0 and 1.
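One step of the map, as a sketch (Python here; the real version lives in OpenGL):

```python
import cmath

def feedback_map(z, c, a):
    """Convex combination of two complex maps:
    z' = a*log(z)*c + (1-a)*(z^2 + c), with 0 <= a <= 1."""
    return a * cmath.log(z) * c + (1 - a) * (z * z + c)
```

At a = 0 this is the familiar Julia-set map z^2 + c; at a = 1 it's a pure scaled and rotated logarithm; the joystick slides smoothly between the two.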

There are two modes: interactive and animated. In interactive mode, c and a are controlled with a joystick (which makes it feel like a flight simulator on acid). The user can also place control points in this (c,a) space. In animated mode, the parameters move smoothly between these control points along a Catmull-Rom spline, which produces a nice C1 continuous curve.
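A minimal uniform Catmull-Rom segment looks like this (sketch; in practice it's applied componentwise to the control points in (c, a) space):

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Uniform Catmull-Rom segment: interpolates from p1 (t=0) to p2
    (t=1), with C1 continuity across consecutive segments because the
    tangent at each point depends only on its two neighbors. Works on
    floats or complex numbers, so c and a can each be fed through it."""
    return 0.5 * (2 * p1
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t * t * t)
```

Stepping t from 0 to 1 over each consecutive quadruple of control points traces out the whole animated path.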

The feedback loop is rendered offscreen at 4096x4096 pixels. Since colors are inverted every time through the loop, only every other frame is drawn to the screen, to make it somewhat less seizuretastic. At this resolution, the system has 48MB of state. On my GeForce 8800GTS I can get about 100 FPS in this loop; by a conservative estimate of the operations involved, this is about 60 GFLOPS. I bow before NVIDIA. Now if only I had one of these...


20070815

Arduino and ultrasonic rangefinders

If you've been following new media art blogs at all, you've probably heard of Arduino. Basically, it puts together an AVR microcontroller, supporting electronics, USB or serial for programming, and easy access to digital and analog I/O pins. Programming it is very simple, using a language almost identical to C, with no tedious initialization boilerplate (compare that to the hundreds of lines of assembly needed to get anything working in EE 51). This seems like a no-hassle way to play with microcontroller programming and interfacing to real-world devices like sensors, motors, etc.

Another cool thing I found is the awkwardly named PING))) Ultrasonic Rangefinder. It's a device that detects the distance to an object up to 3 meters away. A couple of these strategically placed throughout a room, possibly mounted on servos to scan back and forth, could be used for crowd feedback as we've discussed here previously. They're also really easy to interface with.

Update: I thought of a cool project using these components plus an accelerometer, in a flashlight form factor. The accelerometer provides dead-reckoning position; with rangefinding this becomes a coarse-grained 3d scanner, suitable for interpretive capture of large objects such as architectural elements (interpretive, because the path taken by the user sweeping the device over the object becomes part of the input). I may not be conveying what exactly I mean or why this is cool, but this is mostly a note to myself anyway. So there.


20070725

Extraction of musical structure

I think my next big project will involve automatically extracting structure from music. Mike and I had some discussions about doing this with machine learning / evolutionary algorithms, which produced some interesting ideas. For now I'm implementing some of the more traditional signal-processing techniques. There's an overview of the literature in this paper.

What I have to show so far is this:


This (ignoring the added colors) is a representation of the autocorrelation of a piece of music ("Starlight" by Muse). Each pixel of distance along either the x or y axis represents one second of time, and the darkness of the pixel at (x,y) is proportional to the difference in average intensity between those two points in time. Thus, light squares on the diagonal represent parts of the song that are homogeneous with respect to energy.
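The plot reduces to a few lines of numpy (a sketch; the per-second energies would of course come from the decoded audio):

```python
import numpy as np

def self_similarity(energy):
    """energy: 1-D array of average intensity, one sample per second.
    Entry (x, y) is the absolute difference in average intensity between
    seconds x and y; after inverting for display, homogeneous stretches
    of the song show up as light squares along the diagonal."""
    e = np.asarray(energy, dtype=float)
    return np.abs(e[:, None] - e[None, :])
```

The matrix is symmetric with a zero diagonal, and off-diagonal light squares mark pairs of distinct sections that resemble each other.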

The colored boxes were added by hand, and represent the musical structure (mostly, which instruments are active). So it's clear that the autocorrelation plot does express structure, although at this crude level it's probably not good enough for extracting this structure automatically. (For some songs, it would be; for example, this algorithm is very good at distinguishing "guitar" from "guitar with screaming" in "Smells Like Teen Spirit" by Nirvana.) An important idea here is that the plot can show not only where the boundaries between musical sections are, but also which sections are similar (see for example the two cyan boxes above).

The next step will be to compare power spectra obtained via FFT, rather than a one-dimensional average power. This should help distinguish sections which have similar energy but use different instruments. The paper referenced above also used global beat detection to lock the analysis frames to beats (and to measures, by assuming 4/4 time). This is fine for DDR music (J-Pop and terrible house remixes of 80's music) but maybe we should be a bit more general. On the other hand, this approach is likely to improve quality when the assumptions of constant meter and tempo are met.
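The spectral version replaces the scalar energy with a per-frame power spectrum and compares frames by spectral distance. A sketch, assuming fixed non-overlapping frames rather than beat-locked ones:

```python
import numpy as np

def spectral_similarity(signal, frame_len):
    """Slice the signal into non-overlapping frames, take the windowed
    magnitude spectrum of each, and compare frames by Euclidean distance
    between spectra, so sections with similar energy but different
    instrumentation still come apart."""
    n = len(signal) // frame_len
    frames = np.reshape(signal[:n * frame_len], (n, frame_len))
    spectra = np.abs(np.fft.rfft(frames * np.hanning(frame_len), axis=1))
    diff = spectra[:, None, :] - spectra[None, :, :]
    return np.linalg.norm(diff, axis=2)
```

Two frames containing equal-amplitude tones at different frequencies have identical average power but land far apart in this matrix, which is exactly the distinction the one-dimensional version misses.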

On the output side, I'm thinking of using this to control the generation of flam3 animations. The effect would basically be Electric Sheep synced up with music of your choice, including smooth transitions between sheep at musical section boundaries. The sheep could be automatically chosen, or selected from the online flock in an interactive editor, which could also provide options to modify the extracted structure (associate/dissociate sections, merge sections, break a section into an integral number of equal parts, etc.) For physical installation, add a beefy compute cluster (for realtime preview), an iPod dock / USB port (so participants can provide their own music), a snazzy touchscreen interface, and a DVD burner to take home your creations.


20070718

Simple DIY multitouch interfaces

Multitouch interfaces are surprisingly easy to make. Here's a design using internal reflection of IR LED light in acrylic, and here's an extremely simple and clever design using a plastic bag filled with colored water. Minority Report here we come.


OpenCV : open-source computer vision

OpenCV is an open source library from Intel for computer vision. To quote the page,

"This library is mainly aimed at real time computer vision. Some example areas would be Human-Computer Interaction (HCI); Object Identification, Segmentation and Recognition; Face Recognition; Gesture Recognition; Motion Tracking, Ego Motion, Motion Understanding; Structure From Motion (SFM); and Mobile Robotics."

Sounds like some of this could be pretty useful for interactive video neuro-art, or whatever the hell it is we're doing.


20070715

Whorld : a free, open-source visualizer for sacred geometry

From the homepage:

"Whorld is a free, open-source visualizer for sacred geometry. It uses math to create a seamless animation of mesmerizing psychedelic images. You can VJ with it, make unique digital artwork with it, or sit back and watch it like a screensaver."


20070530

What is Circuit Bending?

What is circuit bending?

An introduction to a strange and wonderful hobby.


20070517

Botborg: more video feedback art

'Botborg is a practical demonstration of the theories of Dr Arkady Botborger (1923-81), founder of the 'occult' science of Photosonicneurokineasthography - translated as "writing the movement of nerves through use of sound and light". Botborg claim that sound, light, three-dimensional space and electrical energy are in fact one and the same phenomena, and that the capacity of machines to alter our neural impulses will bring about the next stage in human evolution.'

I like the concept, but it's kinda ugly. In a pure visual-aesthetics sense, I think we could do better. On the flip side, it's also nicely disturbing (it makes some nice growling noises).

On a related note, do any of you know anything about applying to art schools in new-media art? I have some idea of which schools I'd apply to, but no idea how to convince them to take me seriously (or indeed how to convince myself to take me seriously).


20070506

Flock: for saxophone quartet, audience participation, and video

This work incorporates several of the ideas we discussed here previously, relating to crowd feedback and such. Looks pretty cool.


20070423

More on crowd feedback

Everyone has a cellphone now, right? If you had a few highly directional antennae, you might be able to use the amount of RF activity in a few cellphone bands as an approximation of crowd activity. You could also look for Bluetooth phones and perhaps remember individual Bluetooth IDs, although I'm not sure whether most phones will respond to Bluetooth probes in normal operation.

Another approach would be suitable for a conference or other event where participants have badges. Simply put an RFID tag in each badge and have a short-range transceiver near each display. Now the art not only responds to aggregate preferences, but it also personalizes itself for each viewer. Effects which have previously held a participant's attention will appear more often when that participant is nearby. This will probably result in overall better evolutionary art -- instead of trying to impress the whole crowd, which is noisy and fickle, the algorithm tries to impress every individual person. While it's impressing one group, other people may be attracted in, and this constitutes more upvotes for that gene.

I think one important feature for this to work effectively is a degree of temporal coherence for a given display. If they're each showing short, unrelated clips (like Electric Sheep does), then people will not move around fast enough for the votes to be meaningful. Rather, each display should slowly meander through the parameter space, displaying similar types of fractal for periods on the order of 10 minutes (though of course there may be rapidly-varying animation parameters as well; these would not be parameters in the GA, though their rates of change, functions used, etc. might be).


20070421

Idea : crowd feedback

This is another idea relating to video feedback systems. Imagine an exhibition of a system like Perceptron on several monitors throughout a gallery space. A set of cameras watches the crowd from above, and uses simple frame differencing and motion detection algorithms to determine a map of activity level across the space. This then feeds into the video system; perhaps each region of the space is associated with some IFS function, seed point, or color, and the activity level in that region determines how prominently that feature affects the images.
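The activity map itself is almost trivial (numpy sketch; a real installation would pull frames from the cameras with something like OpenCV):

```python
import numpy as np

def activity_map(prev, curr, grid=(4, 4)):
    """Frame differencing: average |curr - prev| over a coarse grid of
    regions, giving one activity level per region of the floor. Each
    region's level can then drive an IFS function, seed point, or color."""
    diff = np.abs(curr.astype(float) - prev.astype(float))
    gh, gw = grid
    h, w = diff.shape
    diff = diff[:h - h % gh, :w - w % gw]  # crop to a multiple of the grid
    return diff.reshape(gh, diff.shape[0] // gh,
                        gw, diff.shape[1] // gw).mean(axis=(1, 3))
```

Feeding consecutive grayscale frames through this gives a small matrix of activity levels, updated once per frame, ready to modulate whatever parameter each region is wired to.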

Each monitor can display a different view of the same overall parameter space, so at any given time there will be some "interesting" and some "boring" monitors. Viewers are naturally drawn towards more "interesting" (by their own aesthetic senses) monitors, and in moving to get a better look they affect the whole system. In essence, the aesthetic preferences of the viewers (now participants) become another layer of feedback.

If Hofstadter is right, and "strange loops" give rise to conscious souls, then should the participants in such an exhibition be considered as having (slightly) interconnected souls? If so, how does the effect compare in magnitude to the interconnectedness we all share through the massive feedback loop of the everyday world? Does this effect extend to the artist who makes the system and then sits back and watches passively? What about the computer that's making it all happen? Is any of this actually "strange" enough to be considered a strange loop? All of these questions seem fantastically hard to answer, but it's a lot of fun to think about.


Idea : the laser glove

This is an idea I had for a simple, dirt-cheap glove-based input device, for intuitive control of audio/video synthesizers like Perceptron and such. It consists of a glove with a commodity red laser pointer oriented along each finger. This allows the user to control the shape, size, and orientation of a cluster of laser dots on a screen. A webcam watches the screen and uses the dots to control software.

The software could do any of a number of things. One approach would be to fit the dot positions to a set of splines, then use properties of these splines such as length, average direction, curvature, etc. as input parameters to the synthesizer system. At Drop Day we had a lot of fun pointing lasers at feedback loops, and that was without any higher-level processing.
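Even before any spline fitting, useful features fall straight out of the dot positions (Python sketch; the (x, y) positions would come from thresholding the webcam image, and the ordering thumb-to-pinky is an assumption):

```python
import math

def dot_features(dots):
    """dots: ordered list of (x, y) laser-dot positions. Returns the
    polyline length through the dots and the average heading angle,
    two cheap stand-ins for the spline properties described above."""
    segs = list(zip(dots, dots[1:]))
    length = sum(math.hypot(x2 - x1, y2 - y1)
                 for (x1, y1), (x2, y2) in segs)
    # Average direction: angle of the summed segment vectors.
    dx = sum(x2 - x1 for (x1, y1), (x2, y2) in segs)
    dy = sum(y2 - y1 for (x1, y1), (x2, y2) in segs)
    return length, math.atan2(dy, dx)
```

Spreading or curling the fingers changes the length, and tilting the hand swings the heading, so even these two numbers give a synthesizer something expressive to chew on.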

Laser pointers are now less than $1.00 each, bought in bulk on eBay. (I have several dozen in my room right now.) I don't know of a good way to differentiate one dot or user from another without adding a lot of complexity, but I think cool effects could be had by just heuristically detecting dot formations. The emergent behavior from multiple interacting users might be desired, anyway.

On a slightly related note, here's some guy's MIT Media Lab thesis that I found while browsing the Web one fine day: Large Group Musical Interaction using Disposable Wireless Motion Sensors


20070416

I'm on the Interblag!

I finally got around to writing something for this blog. First, I have a quotation from our beloved Admiral:

"Do you want this fire extinguisher? No? How about this tag that says 'do not remove under penalty of fire marshal'?"

Second, the Perceptron screenshots on here look really damn good, and I decided I should post some of what I've been working on lately as well. This started yesterday as a CS 101 assignment and kind of got out of control.






There's also a movie here of me playing with it. It's 56M and still looks shitty due to compression, but you can at least get the idea. May cause seizures in a small percentage of the population.

Update: The video is (maybe) on YouTube now: