For this project, we were tasked with developing a sensor that could emit sounds generated by movements of the human body. Our job was to take the audio produced by the sensor and give an interpretive performance based on that audio data.
My group consisted of three other members: Brooke Neal, Ziyang Qui, and Danny Henderson. While we all helped one another with each aspect of the project, Brooke's and my main focus was building the sensors, as well as participating in the actual performance of the piece. Ziyang created software in Max that generated video output based on our audio data, and she also joined in on the performance. Danny handled our sound processing and developed software for the audio output. During the performance, he oversaw the software and manipulated the sound produced by our body movements.
I really enjoyed working with these sensors because I would never have thought of using this kind of biological data in this way. We learned that every body generates different sounds because everyone is built differently. During our performance, we passed the sensor from person to person because we found that this technique produced the most unique outputs.