Hi! The RAPID-MIX API is an easy-to-use toolkit that makes sensor integration, DSP for audio and multimodal interaction, and interactive machine learning accessible to artists, designers, makers, educators, and beginners, as well as creative companies and independent developers.
Our toolkit has been evolving! Here's its current state:
RapidLib allows users to write simple code to train and run machine learning models.
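RapidLib's actual APIs are in C++ and JavaScript; as a hedged illustration of its train-then-run workflow (the class and method names below are conceptual stand-ins, not RapidLib's API), here is a minimal nearest-neighbour regression in Python:

```python
# Conceptual sketch of a RapidLib-style train/run workflow.
# This is NOT the RapidLib API; it only mimics the pattern of
# training on {input, output} example pairs and then running
# new inputs through the trained model.

class SimpleRegression:
    """1-nearest-neighbour 'regression' over recorded examples."""

    def __init__(self):
        self.examples = []  # list of {"input": [...], "output": [...]}

    def train(self, training_set):
        self.examples = list(training_set)

    def run(self, input_vector):
        # Return the output of the closest recorded input.
        def dist(ex):
            return sum((a - b) ** 2 for a, b in zip(ex["input"], input_vector))
        return min(self.examples, key=dist)["output"]

model = SimpleRegression()
model.train([
    {"input": [0.0, 0.0], "output": [0.0]},
    {"input": [1.0, 1.0], "output": [10.0]},
])
print(model.run([0.9, 0.8]))  # closest recorded input is [1.0, 1.0]
```

The same train-on-examples, then run-on-live-input pattern is what RapidLib exposes in a few lines of C++ or JavaScript.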
Wekinator for RAPID-MIX allows users to build and apply machine learning models through an interactive workflow.
PiPo-SDK and Waves LFO - APIs with a simple plugin architecture for processing streams of multi-dimensional data, such as audio, audio descriptors, or gesture and motion data. Users build modular processing chains by connecting LFO operators.
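As a hedged sketch of that operator-chaining idea (the names below are illustrative, not the actual PiPo or Waves LFO API), each operator transforms one frame of multi-dimensional data and passes it to the next:

```python
# Conceptual sketch of an LFO-style operator chain: each operator
# processes one frame (a list of floats) and the chain connects
# operators so frames flow through them in order.

class Scale:
    """Multiplies every dimension of a frame by a constant factor."""
    def __init__(self, factor):
        self.factor = factor
    def process(self, frame):
        return [x * self.factor for x in frame]

class MovingAverage:
    """Smooths each dimension over a sliding window of recent frames."""
    def __init__(self, size):
        self.size = size
        self.history = []
    def process(self, frame):
        self.history.append(frame)
        self.history = self.history[-self.size:]
        n = len(self.history)
        return [sum(f[i] for f in self.history) / n for i in range(len(frame))]

class Chain:
    """Connects operators into a modular processing pipeline."""
    def __init__(self, *operators):
        self.operators = operators
    def process(self, frame):
        for op in self.operators:
            frame = op.process(frame)
        return frame

chain = Chain(Scale(2.0), MovingAverage(2))
print(chain.process([1.0, 2.0]))  # scaled to [2.0, 4.0], window of one
print(chain.process([3.0, 4.0]))  # averaged with the previous frame
```

The real PiPo/LFO operators follow the same shape: small, single-purpose processors that can be recombined freely for audio, descriptor, or motion streams.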
XMM - provides three main components:
- XMM-Node, a Node.js module for real-time training of Markov models from gesture recordings, pose and gesture recognition, gesture following, and generation of control parameters through regression.
- XMM-Client, a library for formatting gesture recordings and loading models trained by the Node component to perform real-time gesture classification, regression, and following.
- XMM-LFO, which wraps the XMM-Client classes into LFO operators.
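XMM itself uses Gaussian mixtures and hierarchical Markov models; as a much-simplified, self-contained stand-in (none of this is XMM's API), the sketch below shows the same record/train/classify workflow, where each "model" is just the mean frame of a class:

```python
# Much-simplified sketch of the XMM-style workflow: record labelled
# gesture phrases, train one model per class, then classify incoming
# frames.  XMM's real models are GMMs and hierarchical HMMs; here a
# class model is simply the mean of its recorded frames.

class GestureRecognizer:
    def __init__(self):
        self.phrases = {}  # label -> list of recorded frames
        self.models = {}   # label -> mean frame

    def record(self, label, frame):
        self.phrases.setdefault(label, []).append(frame)

    def train(self):
        for label, frames in self.phrases.items():
            n = len(frames)
            self.models[label] = [sum(f[i] for f in frames) / n
                                  for i in range(len(frames[0]))]

    def classify(self, frame):
        # Pick the class whose mean frame is closest to the input.
        def dist(label):
            m = self.models[label]
            return sum((a - b) ** 2 for a, b in zip(m, frame))
        return min(self.models, key=dist)

rec = GestureRecognizer()
for f in ([0.1, 0.2], [0.2, 0.1]):
    rec.record("swipe", f)
for f in ([0.9, 0.8], [0.8, 0.9]):
    rec.record("circle", f)
rec.train()
print(rec.classify([0.85, 0.85]))  # classified as "circle"
```

XMM adds the temporal modelling this sketch lacks, which is what enables gesture *following* (tracking progress through a gesture) rather than only whole-gesture recognition.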
RepoVizz API v2 - an online repository for storing, retrieving, and processing multimodal datasets.
Freesound API - a collaborative database of Creative Commons-licensed sounds.
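The Freesound API v2 exposes, among other things, a token-authenticated text-search endpoint. As a hedged sketch (verify the endpoint and auth scheme against the current Freesound documentation), a search request URL can be built like this, without performing the network call:

```python
# Sketch of building a Freesound API v2 text-search request URL.
# The endpoint path and token-based auth reflect the public API
# docs at the time of writing; check the official Freesound
# documentation before relying on them.
from urllib.parse import urlencode

FREESOUND_SEARCH = "https://freesound.org/apiv2/search/text/"

def build_search_url(query, api_token, page_size=15):
    params = {"query": query, "token": api_token, "page_size": page_size}
    return FREESOUND_SEARCH + "?" + urlencode(params)

url = build_search_url("rain", "YOUR_API_TOKEN")
print(url)
```

Passing the resulting URL to any HTTP client returns a JSON page of matching sounds that you can preview, download, and reuse under their Creative Commons licences.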
You also get CodeCircle, a collaborative online live coding environment, and a vibrant community in which to learn, exchange and develop your knowledge, and push your innovations to some of the coolest music tech companies!
Have a look at our Get Started and Walkthrough guides to learn more about each of these.