RAPIDMIX

User evaluation dataset for the Freemix and Multimodal Recognition prototypes


As part of the Evaluation work package (WP6) and more specifically deliverable D6.1 (Assessment report with user feedback and improvements to the prototypes), we have compiled a short dataset from pilot studies on two Early RAPID-MIX prototypes: FreeMix and Multimodal Recognition.

Both prototypes were evaluated using the procedure presented in D6.1. To guarantee that the evaluation procedure is reproducible for other SMEs and institutions that wish to replicate it or incorporate it into their own product development toolchains, we are releasing the data acquired during the prototype evaluations openly and without restrictions under a Creative Commons license.

For details about the evaluation procedure, see D6.1. Each item in the dataset consists of a RepoVizz datapack with the following contents:

  • Video – using a handheld video camera
  • Audio – from the camera’s microphone
  • R-IoT data
  • Raw – battery voltage, GP switch state, and the 9 raw channels from the motion sensor
    • Quaternion – 4 channels containing the quaternion orientation computed with the Madgwick algorithm
    • Euler – 3 channels containing the Euler angles computed with the same algorithm
    • Analog – 2 channels containing the digitized values of the two analog inputs
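The Quaternion and Euler channels encode the same orientation in two forms. As an illustration of the relationship between them (a minimal sketch, not the sensor's firmware code; the channel ordering w, x, y, z is an assumption), a unit quaternion can be converted to roll/pitch/yaw Euler angles as follows:

```python
import math

def quaternion_to_euler(w, x, y, z):
    """Convert a unit quaternion (w, x, y, z) to Euler angles in radians.

    Returns (roll, pitch, yaw) using the common aerospace Z-Y-X convention.
    """
    roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    # Clamp to [-1, 1] to avoid domain errors from floating-point drift
    # when pitch approaches +/-90 degrees.
    sinp = max(-1.0, min(1.0, 2.0 * (w * y - z * x)))
    pitch = math.asin(sinp)
    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return roll, pitch, yaw
```

For example, the identity quaternion (1, 0, 0, 0) maps to all-zero angles, and a 90° rotation about the vertical axis appears only in the yaw channel.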

FreeMix evaluation dataset

The FreeMix prototype integrates RAPID-MIX technologies for audio analysis, sound synthesis and motion sensing to re-perform recorded sound and music. It uses the Freesound database as its main source of sound material and the IRCAM R-IoT motion sensor for gesture input.

Dataset contents:

  • User 1, task 1
  • User 1, task 2
  • User 2, task 1
  • User 2, task 2
  • User 3, task 1
  • User 3, task 2

Multimodal recognition evaluation dataset

The Multimodal Recognition prototype targets the creation of interactive generative music experiences driven by Interactive Machine Learning (IML) gesture recognition. It uses the GiantSteps Music Engines (Harmonic House Filler and Dr. Markov) for real-time algorithmic composition, the XMM library for IML-based gesture recognition, and the IRCAM R-IoT motion sensor.
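The IML workflow behind this prototype is: record a few labelled example gestures, then classify incoming motion traces against them. The sketch below illustrates that train-on-few-examples loop with a 1-nearest-neighbour classifier over dynamic-time-warping distances; this is a simplified stand-in, not the XMM API (XMM itself uses probabilistic models), and all names here are hypothetical:

```python
def dtw_distance(a, b):
    """Dynamic-time-warping distance between two 1-D motion traces,
    allowing gestures performed at different speeds to be compared."""
    n, m = len(a), len(b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

class GestureRecognizer:
    """Minimal IML-style recognizer: train on a few labelled example
    traces, then classify a new trace by its nearest training example."""

    def __init__(self):
        self.examples = []  # list of (label, trace) pairs

    def train(self, label, trace):
        self.examples.append((label, trace))

    def classify(self, trace):
        return min(self.examples,
                   key=lambda ex: dtw_distance(ex[1], trace))[0]
```

A typical interactive session would call `train` once or twice per gesture class (e.g. one sensor channel of an upward versus a downward movement), then stream live traces through `classify` to drive the music engines.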

Dataset contents:

  • User 1
  • User 2


Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright © 2015, 2016 RAPIDMIX. All Rights Reserved.