Global Sequencer
Ed Carter and Matt Jarvis
Global Sequencer Overview from Matt Jarvis on Vimeo.
Commissioned by CultureCode, Ed Carter and Matt Jarvis created an audiovisual experience from a dataset gathered for their previously commissioned EYE Project.
Using latitude and longitude coordinates, the map is designed to create sound from any location-based dataset. In this beta version, the data represented are animal sightings from the EYE Project, in which young people geotagged animals in the wild. Melodies are created by playing the coordinates as a step sequencer, with each digit relating to a specific pitch. The type of oscillator generating the sound wave relates to the hemisphere in which the coordinate is located. Further layers of audio are created by passing the dataset through simple text-to-speech software, with the amplitude of the digital voice controlling pitch, or by using an envelope that allows only percussive elements to be heard. The aim was to create an encoding which could theoretically be reversed, making it possible to extract the data back from the sounds.
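To make the digit-to-pitch idea concrete, here is a minimal Python sketch, not the artists' code: it assumes each digit of a coordinate becomes one sequencer step mapped to a semitone above middle C, and it invents a hemisphere-to-waveform split, since the project's actual pitch and oscillator assignments are not documented here.

```python
def coordinate_to_steps(lat: float, lon: float) -> list[dict]:
    """Turn a lat/lon pair into step-sequencer steps.

    Each digit 0-9 is mapped to a semitone offset above a base note;
    the hemisphere picks the waveform. Both mappings are assumptions
    made for illustration, not the project's documented encoding.
    """
    base_midi = 60  # middle C; an assumed anchor pitch
    steps = []
    for value, positive_wave, negative_wave in (
        (lat, "sine", "square"),       # assumed north/south split
        (lon, "triangle", "sawtooth"), # assumed east/west split
    ):
        wave = positive_wave if value >= 0 else negative_wave
        # Keep only the digits, dropping sign and decimal point.
        digits = [int(c) for c in f"{abs(value):.4f}" if c.isdigit()]
        for d in digits:
            steps.append({"midi_note": base_midi + d, "waveform": wave})
    return steps

if __name__ == "__main__":
    # An invented sighting coordinate near Newcastle upon Tyne.
    for step in coordinate_to_steps(54.9783, -1.6178):
        print(step)
```

Because the mapping uses only the digits themselves, it is invertible in the sense the text describes: reading the pitches back as semitone offsets recovers the original coordinate digits.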
The map can be moved freely, turning it into a music sequencer in which the melodies are generated by the dataset while the order and speed of playback are controlled by a performer.