
20070815

Arduino and ultrasonic rangefinders

If you've been following new media art blogs at all, you've probably heard of Arduino. Basically, it puts together an AVR microcontroller, supporting electronics, a USB or serial connection for programming, and easy access to digital and analog I/O pins. The programming is very simple, in a language almost identical to C, with no tedious initialization boilerplate (compare the hundreds of lines of assembly needed to get anything working in EE 51). This seems like a no-hassle way to play with microcontroller programming and to interface with real-world devices like sensors, motors, etc.
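
For a sense of how little code is involved, here's the canonical blink sketch (assuming an LED on digital pin 13, which doubles as the onboard LED on most boards). This really is the entire program:

    // Blink an LED -- no startup code or register fiddling required.
    int ledPin = 13;  // digital pin 13 (also the onboard LED on most boards)

    void setup() {
      pinMode(ledPin, OUTPUT);       // configure the pin as an output
    }

    void loop() {
      digitalWrite(ledPin, HIGH);    // LED on
      delay(500);                    // wait half a second
      digitalWrite(ledPin, LOW);     // LED off
      delay(500);
    }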

Another cool thing I found is the awkwardly named PING))) Ultrasonic Rangefinder. It's a device that measures the distance to an object up to 3 meters away. A couple of these placed strategically around a room, possibly mounted on servos to scan back and forth, could be used for crowd feedback as we've discussed here previously. They're also really easy to interface with.
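
For example, reading one from an Arduino looks roughly like this (a sketch following the datasheet's trigger/echo timing; the choice of pin 7 is arbitrary):

    // Read a PING))) sensor over its single signal pin.
    const int pingPin = 7;

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      // Trigger: a short HIGH pulse on the signal pin.
      pinMode(pingPin, OUTPUT);
      digitalWrite(pingPin, LOW);
      delayMicroseconds(2);
      digitalWrite(pingPin, HIGH);
      delayMicroseconds(5);
      digitalWrite(pingPin, LOW);

      // The sensor replies with a HIGH pulse whose width is the
      // round-trip echo time in microseconds.
      pinMode(pingPin, INPUT);
      long duration = pulseIn(pingPin, HIGH);

      // Sound travels roughly 29 us per cm; halve for the round trip.
      long cm = duration / 29 / 2;
      Serial.println(cm);
      delay(100);
    }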

Update: I thought of a cool project using these components plus an accelerometer, in a flashlight form factor. The accelerometer provides dead-reckoning position; combined with rangefinding, this becomes a coarse-grained 3D scanner, suitable for interpretive capture of large objects such as architectural elements (interpretive, because the path the user takes sweeping the device over the object becomes part of the input). I may not be conveying exactly what I mean or why this is cool, but this is mostly a note to myself anyway. So there.
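
Just so future-me remembers the shape of it, here's a bare sketch of the fusion step, with hypothetical types, and completely ignoring the fact that double-integrated accelerometer data drifts badly. Orientation would have to come from somewhere else (gyro, magnetometer, whatever):

    // Dead reckoning + rangefinding, the bare idea (hypothetical types;
    // real accelerometers drift, so this is a sketch of the math, not a fix).
    struct Vec3 { float x, y, z; };

    static Vec3 add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    static Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

    Vec3 velocity = {0, 0, 0};
    Vec3 position = {0, 0, 0};

    // Per sample: acceleration already rotated into the world frame, the
    // device's forward unit vector, the range reading in meters, and the
    // sample interval in seconds. Returns one point on the scanned surface.
    Vec3 samplePoint(Vec3 worldAccel, Vec3 forward, float range, float dt) {
      velocity = add(velocity, scale(worldAccel, dt));  // integrate accel once
      position = add(position, scale(velocity, dt));    // and again for position
      return add(position, scale(forward, range));      // offset along the beam
    }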


20070718

Simple DIY multitouch interfaces

Multitouch interfaces are surprisingly easy to make. Here's a design using frustrated total internal reflection of IR LED light in acrylic, and here's an extremely simple and clever design using a plastic bag filled with colored water. Minority Report, here we come.


OpenCV : open-source computer vision

OpenCV is an open-source library from Intel for computer vision. To quote the page:

"This library is mainly aimed at real time computer vision. Some example areas would be Human-Computer Interaction (HCI); Object Identification, Segmentation and Recognition; Face Recognition; Gesture Recognition; Motion Tracking, Ego Motion, Motion Understanding; Structure From Motion (SFM); and Mobile Robotics."

Sounds like some of this could be pretty useful for interactive video neuro-art, or whatever the hell it is we're doing.
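
To give a flavor, here's about the smallest useful OpenCV program I can imagine (a sketch using the C API; it grabs webcam frames and runs Canny edge detection, a plausible first building block for video-reactive visuals):

    // Minimal OpenCV loop: webcam in, edge map out.
    #include <cv.h>
    #include <highgui.h>

    int main() {
      CvCapture* capture = cvCaptureFromCAM(0);        // default webcam
      cvNamedWindow("edges", CV_WINDOW_AUTOSIZE);
      IplImage* gray = 0;
      IplImage* edges = 0;
      while (IplImage* frame = cvQueryFrame(capture)) {
        if (!gray) {
          gray  = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
          edges = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
        }
        cvCvtColor(frame, gray, CV_BGR2GRAY);          // color -> grayscale
        cvCanny(gray, edges, 50, 150, 3);              // edge detection
        cvShowImage("edges", edges);
        if (cvWaitKey(10) == 27) break;                // Esc quits
      }
      cvReleaseCapture(&capture);
      return 0;
    }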


20070503

More on VR for consciousness hacking

I was talking to Biff today about uses for various senses in the VR consciousness hacking idea. It occurred to me that smell is very low-bandwidth but strongly tied to memory, and thus might be useful for maintaining state across multiple sessions.

Also, Terence McKenna was apparently interested in using VR for similar purposes. I'm not sure whether that makes the idea more or less credible.

In other news, the laser glove is about 80% done; all I need to do is wire it up. I need to talk to some sort of EE person about how to do this without exploding the lasers from overcurrent.
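
(Note to self on the likely answer: a series current-limiting resistor per diode. As a back-of-the-envelope sketch with made-up but plausible numbers that need checking against the actual parts, say a red laser diode with a 2.2 V forward drop, a 20 mA target current, and a 5 V supply:

    R = (V_supply - V_f) / I = (5.0 V - 2.2 V) / 0.020 A = 140 ohms

so the next standard value up, 150 ohms, would be the conservative choice. If the pointer modules have their own driver circuits inside, they may just want their rated battery voltage instead.)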


20070423

More on crowd feedback

Everyone has a cellphone now, right? With a few highly directional antennas, you might be able to use the amount of RF activity in a few cellphone bands as an approximation of crowd activity. You could also look for Bluetooth phones, and perhaps remember individual Bluetooth IDs, although I'm not sure whether most phones respond to Bluetooth inquiries in normal operation.
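
The Bluetooth half is easy to prototype on Linux with BlueZ. A sketch (this only sees phones set to "discoverable", which is exactly the uncertainty above):

    // Scan for discoverable Bluetooth devices and print their addresses.
    // Linux/BlueZ; link with -lbluetooth.
    #include <stdio.h>
    #include <stdlib.h>
    #include <bluetooth/bluetooth.h>
    #include <bluetooth/hci.h>
    #include <bluetooth/hci_lib.h>

    int main() {
      int dev_id = hci_get_route(NULL);   // first Bluetooth adapter
      inquiry_info* info = (inquiry_info*)malloc(255 * sizeof(inquiry_info));
      // ~10 seconds of inquiry (8 * 1.28 s), up to 255 responses.
      int n = hci_inquiry(dev_id, 8, 255, NULL, &info, IREQ_CACHE_FLUSH);
      for (int i = 0; i < n; i++) {
        char addr[19];
        ba2str(&info[i].bdaddr, addr);    // unique hardware address
        printf("%s\n", addr);             // this is the "Bluetooth ID"
      }
      free(info);
      return 0;
    }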

Another approach would be suitable for a conference or other event where participants have badges. Simply put an RFID tag in each badge and a short-range reader near each display. Now the art not only responds to aggregate preferences, it also personalizes itself for each viewer. Effects that have previously held a participant's attention will appear more often when that participant is nearby. This will probably result in better evolutionary art overall -- instead of trying to impress the whole crowd, which is noisy and fickle, the algorithm tries to impress each individual person. While it's impressing one group, other people may be drawn in, and that constitutes more upvotes for that gene.
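
The bookkeeping this implies is simple. A sketch with hypothetical IDs and types, just to show that fitness becomes a per-viewer, attention-weighted tally instead of one global score:

    #include <map>
    #include <utility>

    // Hypothetical IDs: one per badge (RFID tag), one per "gene"
    // (a point in the art's parameter space).
    typedef int ViewerId;
    typedef int GeneId;

    // fitness[gene] accumulates seconds of nearby-viewer attention.
    std::map<GeneId, double> fitness;

    // Remember what held each individual viewer's attention, so a display
    // can bias toward it when that viewer returns.
    std::map<std::pair<ViewerId, GeneId>, double> personalScore;

    // Called periodically for every badge currently read near a display.
    void recordAttention(ViewerId viewer, GeneId shownGene, double seconds) {
      fitness[shownGene] += seconds;                                // global upvote
      personalScore[std::make_pair(viewer, shownGene)] += seconds;  // personal one
    }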

I think one important feature for this to work effectively is a degree of temporal coherence on a given display. If each display shows short, unrelated clips (like Electric Sheep does), people will not move around fast enough for the votes to be meaningful. Instead, each display should slowly meander through the parameter space, showing similar types of fractals for periods on the order of 10 minutes. (There may of course be rapidly varying animation parameters as well; these would not be parameters in the GA, though their rates of change, the functions used, etc. might be.)
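
One way to get that slow meander: give each genome parameter a low-pass-filtered random walk, so a display drifts smoothly instead of jump-cutting. A sketch (the time constant is the knob for the ~10-minute scale):

    #include <cstdlib>

    // Smoothly meander one genome parameter: pick a new target rarely,
    // then approach it exponentially so there are no discontinuities.
    struct MeanderingParam {
      double value = 0.5, target = 0.5;

      void step(double dt, double timeConstant /* seconds, e.g. ~600 */) {
        // New target roughly once per time constant, on average.
        if ((double)rand() / RAND_MAX < dt / timeConstant)
          target = (double)rand() / RAND_MAX;
        // Exponential approach with the same time constant.
        value += (target - value) * (dt / timeConstant);
      }
    };

    // Fast animation parameters would live outside this -- only the slow
    // ones (and, say, the rates of the fast ones) are genome entries.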


20070422

Idea : VR for consciousness hacking

Ooh, interpolating tessellations is an awesome idea. You'd basically have to interpolate under a constraint: that some parts of the spline line up with other parts. But since this constraint is satisfied at all the reference points, I think it would be doable.

I've been thinking lately about virtual reality as a tool for consciousness hacking. VR as it played out in the mid-90s was mostly about representing realistic scenes poorly and at great expense. But I think we can do a lot with abstract (possibly fractal-based) virtual spaces, and the hardware is much better and cheaper now. The kit I'm imagining consists of:

  • 3D stereoscopic head-mounted display with 6DOF motion tracker (like this)
  • High-quality circumaural headphones (like these)
  • Homemade EEG (like this)
  • Possibly other biofeedback devices (ECG, skin resistance, etc.)
  • Intuitive controllers (e.g. data glove like this, camera + glowing disks for whole-body motion-tracking, etc.)
  • A nice beefy laptop with a good graphics card
  • Appropriate choice of alphabet soup and related delivery mechanism, if desired
  • A locking aluminum equipment case with neat foam cutouts for all of the above

With the right software this can obviously do a great many things. For example, I've found that after experimenting with a graphics effect for a while, I develop the ability to hallucinate the same effect. With more control over the training period it might be possible to train more complicated effects, determine how much computation versus playback of prerecorded samples is going on at "runtime", and determine on what level(s) of abstraction the hallucinated data manifests. Of course, for actual scientific results we'd need to duplicate the experiments over many people, but personally I'm more interested in hacks that give me greater access to and understanding of my own mind.


20070421

Idea : the laser glove

This is an idea I had for a simple, dirt-cheap glove-based input device, for intuitive control of audio/video synthesizers like Perceptron and such. It consists of a glove with a commodity red laser pointer oriented along each finger. This allows the user to control the shape, size, and orientation of a cluster of laser dots on a screen. A webcam watches the screen and uses the dots to control software.
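
The vision side is mostly blob detection. A sketch using OpenCV's C API (the threshold values are guesses that would need tuning for the actual screen and camera):

    // Find bright laser dots in a webcam frame by thresholding the red
    // channel and taking the centroid of each resulting blob.
    #include <cv.h>
    #include <highgui.h>
    #include <stdio.h>

    void findDots(IplImage* frame) {
      IplImage* mask = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
      // Laser dots saturate the camera: keep only strongly red pixels.
      cvInRangeS(frame, cvScalar(0, 0, 200, 0),
                        cvScalar(255, 255, 255, 0), mask);  // BGR order

      CvMemStorage* storage = cvCreateMemStorage(0);
      CvSeq* contour = 0;
      cvFindContours(mask, storage, &contour, sizeof(CvContour),
                     CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));
      for (; contour; contour = contour->h_next) {
        CvMoments m;
        cvMoments((CvArr*)contour, &m, 0);
        if (m.m00 > 0)  // centroid of the blob = dot position
          printf("dot at (%.1f, %.1f)\n", m.m10 / m.m00, m.m01 / m.m00);
      }
      cvReleaseMemStorage(&storage);
      cvReleaseImage(&mask);
    }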

The software could do any of a number of things. One approach would be to fit the dot positions to a set of splines, then use properties of these splines such as length, average direction, curvature, etc. as input parameters to the synthesizer system. At Drop Day we had a lot of fun pointing lasers at feedback loops, and that was without any higher-level processing.
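
Even without real splines, crude polyline features over the dots would probably be usable as synth parameters. A sketch (assuming the dots can be ordered, say by finger):

    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Dot { double x, y; };

    // Crude stand-in for the spline idea: treat the ordered dots as a
    // polyline and extract a couple of scalar features for the synthesizer.
    struct DotFeatures { double length, avgDirection; };

    DotFeatures features(const std::vector<Dot>& dots) {
      double length = 0, dx = 0, dy = 0;
      for (std::size_t i = 1; i < dots.size(); i++) {
        double ex = dots[i].x - dots[i - 1].x;
        double ey = dots[i].y - dots[i - 1].y;
        length += std::sqrt(ex * ex + ey * ey);  // total path length
        dx += ex; dy += ey;                      // net displacement
      }
      // Average direction in radians; both values become synth knobs.
      return DotFeatures{ length, std::atan2(dy, dx) };
    }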

Laser pointers now cost less than $1.00 each when bought in bulk on eBay. (I have several dozen in my room right now.) I don't know of a good way to differentiate one dot or user from another without adding a lot of complexity, but I think cool effects could be had just by heuristically detecting dot formations. The emergent behavior from multiple interacting users might be desirable anyway.

On a slightly related note, here's some guy's MIT Media Lab thesis that I found while browsing the Web one fine day: Large Group Musical Interaction using Disposable Wireless Motion Sensors