YouTube user canzona uploaded a video to the web, downloaded YouTube’s compressed copy, and re-uploaded that copy, repeating the cycle one thousand times and gradually eliminating “all human qualities” of his voice and image.
The final video in canzona’s series is a mess of distorted colors and sounds; you can tell from the motion that the original featured a person talking into the camera, but every individual characteristic has been lost, replaced by artifacts created by YouTube’s video and audio compression algorithms.
All 1,000 steps of the process — which took a full year — are available on canzona’s YouTube profile. The project is titled “I Am Sitting in a Video Room,” and it’s inspired by an audio art project recorded by composer Alvin Lucier more than 40 years ago.
In 1969, Lucier made an audio recording of a brief speech, played that recording through speakers into a room, and recorded the playback. He repeated the process over and over; with each generation, the acoustics of the room reinforced its own resonant frequencies until they overwhelmed the speech itself. The result was a recording that sounded more like a message from another dimension than a man’s voice.
Lucier’s work often focused on the natural, physical properties of sound rather than on musical theory. Canzona’s “I Am Sitting in a Video Room” follows the original closely; even the words he speaks echo those of Lucier’s piece.
Each time you upload a video to YouTube, that video is compressed by YouTube’s servers so the file is small enough to stream easily to web viewers. Compression is achieved by recording only the changes from one frame to the next rather than all of the data in each frame, by removing small differences in color that the human eye usually can’t detect while an image is in motion, and by making other small modifications to the image. The idea is that these changes will not cause any dramatic loss in quality to the untrained eye.
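To make the frame-delta idea concrete, here is a minimal Python sketch. It illustrates the general principle only; YouTube’s actual encoders are far more sophisticated, and the function names here are invented for the example.

```python
import numpy as np

def encode_deltas(frames):
    """Store the first frame in full, then only per-pixel changes."""
    keyframe = frames[0]
    deltas = [frames[i].astype(np.int16) - frames[i - 1].astype(np.int16)
              for i in range(1, len(frames))]
    return keyframe, deltas

def decode_deltas(keyframe, deltas):
    """Rebuild each frame by re-applying the stored changes in order."""
    frames = [keyframe]
    for d in deltas:
        frames.append((frames[-1].astype(np.int16) + d).astype(np.uint8))
    return frames

# Two nearly identical 4x4 grayscale frames: the delta is almost all
# zeros, which compresses far better than storing both frames in full.
a = np.zeros((4, 4), dtype=np.uint8)
b = a.copy()
b[0, 0] = 255
key, deltas = encode_deltas([a, b])
assert all((x == y).all() for x, y in zip(decode_deltas(key, deltas), [a, b]))
```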
However, when you compress the same thing 1,000 times over, it’s like the old adage about making a copy of a copy: the changes pile on top of one another until the image becomes unrecognizable. The same goes for audio compression.
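You can reproduce the effect with still images. The sketch below, a rough analogue that assumes the Pillow library and uses repeated JPEG encoding as a stand-in for YouTube’s video codec, re-compresses one picture 1,000 times; the filenames are placeholders for any source image.

```python
from io import BytesIO
from PIL import Image  # Pillow

img = Image.open("portrait.png").convert("RGB")  # placeholder source image

# Re-encode the same image 1,000 times. Each lossy save throws away a
# little more detail, and the artifacts compound from copy to copy.
for _ in range(1000):
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=60)
    buf.seek(0)
    img = Image.open(buf).convert("RGB")

img.save("generation_1000.jpg")
```

Most of the visible damage tends to appear in the early generations; later passes slowly grind what remains into blocky, smeared artifacts, much as canzona’s final upload retains only motion and color.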
Behavioral Targeting Classifications Read Between The Lines
Understanding consumer intent and knowing how to meet consumer needs seem like the perfect combination to determine the type of ad to serve up in any given scenario. It could become the key that unlocks nirvana for companies tapping behavioral targeting.
A new technology provides information not only on how Web site visitors might think and feel about a given topic, but also on their intent: whether they will buy or sell, and when. OpenAmplify recently launched Ampliverse, which allows companies to create taxonomies that classify Web content based on their requirements. Knowing who, when, where and why helps answer the question of which ads to serve where. While it might be easy to determine that a person with a positive view of BMWs should see a car ad, it's more complicated to assign a classification and respond to a request for an ad position somewhere on a publisher's site, which could offer about 600 spaces suitable for running a car ad.
Classifying content is just as important as understanding it; accurate classifications sharpen behavioral targeting decisions. The task becomes more difficult when companies hold differing opinions about how products and services should be classified.
Another relevant technology is a search engine created by a Georgia Tech doctoral student, which relies in part on machines helping Web sites learn dialect and other vernacular, improving search experience and performance when query language is unclear or unorthodox.
So, what about integrating that technology into Ampliverse? Let’s say you wanted to determine what type of car a salesperson might sell someone based on the medical articles he reads online. Using taxonomies not only tells the API which content to serve up; it can also describe the type of convertible to sell him. Together, the understanding and the classification provide the flexibility to define a universe. The technology is still young, but it has a lot of promise.
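As a purely hypothetical sketch of what taxonomy-driven classification could look like, consider the toy Python classifier below. The category names, keyword lists, and classify function are inventions for illustration, not OpenAmplify’s actual API; a real system would also weigh sentiment and intent, not just vocabulary.

```python
# Hypothetical taxonomy: each ad category is defined by the terms a
# company cares about. These categories and keywords are made up.
TAXONOMY = {
    "luxury-auto": {"bmw", "convertible", "sedan", "horsepower"},
    "health": {"cholesterol", "cardiology", "prescription"},
}

def classify(text):
    """Return every taxonomy category whose terms appear in the text."""
    words = set(text.lower().split())
    return [cat for cat, terms in TAXONOMY.items() if words & terms]

print(classify("New BMW convertible reviewed in a cardiology journal"))
# -> ['luxury-auto', 'health']
```

Matching a reader to more than one category at once is exactly the kind of cross-domain signal the car-salesperson example above depends on.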