Experimental session with sounds from the cloud + voice
With a voice talking about the process. https://github.com/sonidosmutantes/apicultor

Sounds are retrieved in real time, with values defined from a dynamic user interface. This means, for example, that its widgets and values can be rearranged during a live performance. In this demonstration, a standard MIDI controller sends messages that set the values of several MIR descriptors, such as spectral complexity, spectral centroid, high-frequency content, duration, and BPM. The apicultor engine then locates a sound with the desired properties in one of the pre-configured online databases. The controller also drives a synthesizer specially developed in SuperCollider. For example, the "drone" sound is achieved by selecting samples with the desired spectral properties (mainly the spectral centroid, i.e. the spectrum's center of mass, and its complexity) and freezing them. Granular synthesis and live coding are among the other techniques involved in the process.

Here the sounds from the web, shared under (free) Creative Commons licenses, are retrieved one by one. Unknown sounds by unknown people, recorded for unknown reasons, are combined, with an independent processing chain for each one. The result is a live experiment that gives the audience the opportunity to broaden their awareness of and perspective on the Cloud, massive content sharing, free licenses, mixing, and recycling.
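The descriptor-driven retrieval described above can be sketched roughly as follows. This is a minimal illustration, not apicultor's actual implementation: the descriptor formulas are standard textbook definitions (spectral centroid as the center of mass of the magnitude spectrum, HFC as bin-weighted spectral energy), and the database entries, sound ids, and target vector are hypothetical.

```python
import numpy as np

def spectral_centroid(frame, sr):
    """Center of mass of the magnitude spectrum, in Hz (Hann-windowed frame)."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    return float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))

def high_frequency_content(frame):
    """High-frequency content: spectral energy weighted linearly by bin index."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    return float(np.sum(np.arange(len(spectrum)) * spectrum ** 2))

def nearest_sound(target, database):
    """Return the id of the database entry whose descriptor vector is
    closest (Euclidean distance) to the target vector set via MIDI."""
    return min(
        database,
        key=lambda entry: np.linalg.norm(np.asarray(entry[1]) - np.asarray(target)),
    )[0]
```

In a setup like this, each incoming MIDI CC value (0-127) would first be scaled into a descriptor's useful range before being placed in the target vector; the id of the chosen sound would then be fetched from the online database and handed to the SuperCollider synth for freezing or granulation.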
Channels: 2 | Samplerate: 48000 | Genre: Electronic | Instrument: Electronic
"Experimental session with sounds from the cloud + voice" by Hernán Ordiales is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. Published: Nov. 16, 2017, 9:14 a.m. by Hernán Ordiales.