Auto Tunes
Reactive technology adds a real-time element to the music experience
Media / 12 Aug 2011
The real-time reflex is a very real phenomenon among Gen Ys. Their expectation that offline experiences keep pace with the nonstop stream of digital content influences the way they consume news, the way they shop, and even the way they experience live entertainment. In turn, avant-garde musicians are experimenting with real-time, reactive technologies to enhance their videos and performances.
ISAM Live
: Following the release of his seventh album, electronic music artist and producer Amon Tobin will kick off an international tour-turned-traveling art installation this month. The ISAM Live experience will feature 3D projection mapping, a much-touted technology that has seen more play in marketing campaigns than in the art world. In Tobin’s concept, large-scale 3D visuals move and morph across a stage set in real-time response to his audio stylings, creating a “visual score” that complements the sounds on stage. (If that reads like techno-babble, check out the show’s extended trailer.) The LA and Brooklyn dates have already sold out, but tickets are still available in other cities.
Chase No Face
: Facial recognition has its share of scary implications, but some are seeing past its creepiness to its potential for both practical purposes (like ID-ing criminals) and purely artistic pursuits. As for the latter, BELL’s music video for “Chase No Face” uses facial recognition to transform singer Olga Bell into a canvas for colored light. Face-tracking software analyzed Bell’s expressions via a hacked Xbox Kinect, and an LED projector cast an array of synchronized light patterns onto her face. While this futuristic face paint could easily hit it big with kids, Bell’s example should establish it as a darling of the AV art scene as well.
Music Gloves
: In a premiere performance at TEDGlobal 2011, singer/songwriter Imogen Heap demonstrated a pair of digital “music gloves” that allowed her to compose a song, on the fly, using only hand gestures. The wireless gloves use motion sensors, gyroscopes, and accelerometers to track hand movements, streaming the data to a laptop that interprets it as commands for audio manipulation. Thus, Heap was able to sample sounds by grasping at air, muffle the audio by clasping her hands together, and lower the volume by miming shushing. She intends to integrate the gloves into her shows and predicts the emergence of apps that let novices create and capture sound through similar gestural interfaces.
©The Intelligence Group