
20100824

Newsy Tuesday

In lieu of actual content (maybe later), I'd like to post something cool from Singularity Hub.

The vOICe is an innovative augmented reality system that lets people see with sound. Images from cameras are translated into tones, which are played through headsets. The system has surprisingly good fidelity, and implications for brain research.



What’s amazing about the system – and what makes it a hot item for neuroscience research – is that it appears to restore the actual subjective experience of vision (visual qualia) to blind users, rather than just teaching them to correlate objects and sounds. Users have reported the return of experiences like depth and the sense of empty space in their environment. The restored vision is not the same as normal visual experience – one user described it as being comparable to an old black and white film, while others report vague impressions of objects as shades of grey. Research is now underway to understand how the vOICe system might be rewiring the brain to achieve this effect.


Holy crap, we can see with sound. No implants required, no electrode arrays on the tongue, just off-the-shelf tech and clever algorithms.


20070815

Arduino and ultrasonic rangefinders

If you've been following new media art blogs at all, you've probably heard of Arduino. Basically, it puts together an AVR microcontroller, supporting electronics, USB or serial for programming, and easy access to digital and analog I/O pins. The programming is very simple: the language is almost identical to C, with none of the tedious initialization boilerplate (compare the hundreds of lines of assembly needed to get anything working in EE 51). This seems like a no-hassle way to play with microcontroller programming and interfacing to real-world devices like sensors, motors, etc.

Another cool thing I found is the awkwardly named PING))) Ultrasonic Rangefinder. It measures the distance to an object up to 3 meters away. A couple of these strategically placed throughout a room, possibly mounted on servos to scan back and forth, could be used for crowd feedback as we've discussed here previously. They're also really easy to interface to.
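The "easy to interface" part comes down to one conversion: the sensor reports distance as the width of an echo pulse, and sound covers roughly 58 µs of round trip per centimeter. A minimal sketch of the math (plain Python for illustration, not Arduino code; the 343 m/s figure assumes room-temperature air):

```python
SPEED_OF_SOUND_M_PER_S = 343.0  # dry air at ~20 °C; varies with temperature

def echo_us_to_cm(pulse_us: float) -> float:
    """Convert an ultrasonic echo pulse width (microseconds) to distance in cm.

    The pulse spans the round trip to the object and back, so divide by 2.
    """
    seconds = pulse_us * 1e-6
    one_way_m = seconds * SPEED_OF_SOUND_M_PER_S / 2.0
    return one_way_m * 100.0
```

By this rule of thumb, a ~58 µs pulse is about 1 cm, and a pulse near the sensor's maximum (~18.5 ms) comes out around the advertised 3 m range.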

Update: I thought of a cool project using these components plus an accelerometer, in a flashlight form factor. The accelerometer provides dead-reckoning position; with rangefinding this becomes a coarse-grained 3D scanner, suitable for interpretive capture of large objects such as architectural elements (interpretive, because the path taken by the user sweeping the device over the object becomes part of the input). I may not be conveying what exactly I mean or why this is cool, but this is mostly a note to myself anyway. So there.


20070627

Idea: immersive video with one projector

This is an idea I had while lying in bed listening to Radiohead and hallucinating. (I was perfectly sober, I swear. The Bends is just that damn good.)

Build a frame structure (out of PVC or similar) with the approximate width/depth of a bed, and height of a few feet -- enough that you could comfortably lie on a mattress inside and not feel claustrophobic. Cover every side with white sheets, drawn taut. Mount a widescreen projector directly above the middle of this structure, pointing down. Then hang two mirrors such that the left third of the image is reflected 90 degrees to the left and the right third is reflected 90 degrees to the right (from the projector's orientation), with the middle third projecting directly onto the top of the frame. Then use more mirrors to get the left and right images onto the corresponding sides of the frame. (You'd probably also need some lenses to make everything focus at the same time; this is the only part I'm really iffy on. Fresnel lenses would probably be a good choice. Anyone who knows optics and has any idea how to set this up, please let me know.)

Anyway, the beauty of this setup is that it allows one to control nearly the whole visual field with a single projector and a single video output, thus minimizing complexity and expense. It's not hard to set up OpenGL to render three separate images to three sections of the screen; they could be different viewpoints in the same 3D scene, although as usual I'm more interested in the more abstract uses of this. In particular, you get control over both central and peripheral vision, which has psychovisual importance.
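Carving one video output into three views is mostly bookkeeping: split the framebuffer into thirds and issue one glViewport call (plus camera setup) per section. A sketch of that bookkeeping in plain Python, with tuples in glViewport's (x, y, width, height) convention:

```python
def third_viewports(width, height):
    """Split a framebuffer into three side-by-side viewport rectangles.

    Returns (x, y, w, h) tuples, left to right. Any leftover pixels from
    the integer division are absorbed by the rightmost section.
    """
    w = width // 3
    return [
        (0, 0, w, height),
        (w, 0, w, height),
        (2 * w, 0, width - 2 * w, height),
    ]
```

Each frame you would loop over these rectangles, set the viewport, set the view matrix for that face of the enclosure, and draw the scene; the mirrors then route each third to its wall.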

I'm really tempted to build this when I get back to Tech, but there's a high probability that someone else's expensive DLP projector will suffer an untimely demise at the hands of improvised mounting equipment.

Edit: I thought of an even simpler setup that does away with the mirrors and lenses. Make the enclosure a half-cylinder, and project a single widescreen image onto it (with the image's left-right axis running head to feet), correcting for cylindrical distortion in software. The major obstacle here is making a uniformly cylindrical projection surface, but that shouldn't be too hard.
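The software correction has a clean form if you assume near-parallel rays from an overhead, long-throw projector (a simplification; a real projector's diverging rays would change the trig). A ray at horizontal offset x (measured in cylinder radii) lands on the half-cylinder at angle asin(x), so sampling the source image at asin(x) scaled to the output range spreads content evenly along the arc:

```python
import math

def cylinder_warp(x: float) -> float:
    """Map a projector column x in [-1, 1] to a source-image column in [-1, 1].

    Assumes overhead, near-parallel rays onto a half-cylinder: a ray at
    offset x lands at angle asin(x), so dividing by pi/2 normalizes the
    arc position back to [-1, 1] and content appears undistorted.
    """
    clamped = max(-1.0, min(1.0, x))
    return math.asin(clamped) / (math.pi / 2)
```

The center and the two tangent edges map to themselves, while pixels near the edges are compressed in the source, compensating for the grazing angle at which they strike the surface.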


20070428

RepRap : a self-replicating rapid prototyper

RepRap is a project to make a rapid prototyping machine (aka 3D printer) which can build most of its own parts, with a total cost of under $500. There are already several working prototypes, and they "hope to announce self-replication in 2008".

"RepRap etiquette asks that you use your machine to make the parts for at least two more... for other people at cost."

If this achieves the exponential growth that they're obviously aiming for, it will enable open source distributed development of physical objects (including of course itself), which would be nothing short of revolutionary.
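The etiquette quoted above implies a concrete growth rate: if every machine survives and prints parts for two more each generation, the population triples per generation. A toy model, purely for the arithmetic:

```python
def machines_after(generations: int, seed: int = 1) -> int:
    """Count RepRaps if each machine prints parts for two more per generation.

    Every existing machine persists and adds two children, so the
    population triples each generation: seed * 3**generations.
    """
    count = seed
    for _ in range(generations):
        count += 2 * count  # two new machines per existing machine
    return count
```

Ten generations from a single seed machine would already mean tens of thousands of machines, which is the "exponential growth" the project is banking on.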

And their canonical test object is a shotglass.