As part of the Evaluation work package (WP6) and more specifically deliverable D6.1 (Assessment report with user feedback and improvements to the prototypes), we have compiled a short dataset from pilot studies on two Early RAPID-MIX prototypes: FreeMix and Multimodal Recognition.
Both prototypes were evaluated using the procedure presented in D6.1. To guarantee that the evaluation procedure can be reproduced by other SMEs and institutions that wish to replicate it or incorporate it into their product development toolchain, we are releasing the data acquired during the prototype evaluations openly and without restriction under a Creative Commons license.
- Video – recorded with a handheld video camera
- Audio – from the camera’s microphone
- R-IoT data
  - Raw – battery voltage, GP switch state, and the 9 raw channels from the motion sensor
  - Quaternion – 4 channels containing the quaternion computed with the Madgwick algorithm
  - Euler – 3 channels containing the Euler angles computed with the same algorithm
  - Analog – 2 channels containing the digitized values of the two analog inputs
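The R-IoT channel layout above can be summarized as a flat frame of 20 values per sample (2 raw scalars, 9 motion channels, 4 quaternion, 3 Euler, 2 analog). The following is a minimal Python sketch of that layout; the field names, the function name, and the assumption that samples arrive as flat 20-value frames in the listed order are illustrative, not a specification of the released files:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RIoTFrame:
    """One sample from the R-IoT data stream, following the channel
    layout listed above. Field names are illustrative; the actual
    column names in the released files may differ."""
    battery_voltage: float   # raw: battery voltage
    switch_state: int        # raw: GP switch state
    motion: List[float]      # raw: 9 motion-sensor channels
    quaternion: List[float]  # 4 channels, Madgwick algorithm output
    euler: List[float]       # 3 Euler angles, same algorithm
    analog: List[float]      # 2 digitized analog inputs

def parse_frame(values: List[float]) -> RIoTFrame:
    """Split a flat 20-value frame into named channel groups,
    assuming the order listed above."""
    if len(values) != 20:
        raise ValueError(f"expected 20 channels, got {len(values)}")
    return RIoTFrame(
        battery_voltage=values[0],
        switch_state=int(values[1]),
        motion=values[2:11],
        quaternion=values[11:15],
        euler=values[15:18],
        analog=values[18:20],
    )
```

Grouping the channels this way makes the per-sample structure explicit when iterating over the recorded data, regardless of how the frames are actually serialized on disk.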
FreeMix evaluation dataset
The FreeMix prototype integrates RAPID-MIX technologies for audio analysis, sound synthesis, and motion sensing to re-perform recorded sound and music. It uses the Freesound database as its main source of sound material, together with the IRCAM R-IoT motion sensor.
Multimodal recognition evaluation dataset
The Multimodal Recognition prototype targets the creation of interactive generative music experiences based on Interactive Machine Learning (IML)-based gesture recognition. It uses the GiantSteps Music Engines (Harmonic House Filler and Dr. Markov) for real-time algorithmic composition, the XMM library for IML-based gesture recognition, as well as the IRCAM R-IoT motion sensor.
This work is licensed under a Creative Commons Attribution 4.0 International License.