News

AGILE and the RAPID-API

We are excited to announce that we are working with AGILE, a European Union Horizon 2020 Research Project developing and building a modular software-and-hardware gateway for the Internet of Things (IoT).

We will be combining the RAPID-API with the modular hardware of the AGILE project. Their smart hardware provides a multitude of ways to interface with the real world through sensors, and the RAPID-API will give people ways to harness and use that real-world data.

You’ll be able to see some of the results of this collaboration at Adaptation, a media arts event organised by AGILE. Artists are invited to submit proposals to Adaptation for works which represent data through audiovisual experiences, and selected artists will work with technologists to realise their ideas. We will be working with AGILE to have the RAPID-API available to help this realisation. Find out more about Adaptation and how to get involved here.

Find out more about AGILE here.

BITalino at Smartgeometry 2016

BITalino, the board developed by RAPID-MIX partners PLUX that gives quick and easy access to biosignals, featured in an installation by the Atmospheric Delight team at Smartgeometry 2016 in Sweden, a workshop and conference exploring novel technologies. Read about it in more detail in Architect Magazine:

http://www.architectmagazine.com/technology/smartgeometry-2016-a-path-to-interpersonal-and-interplanetary-connections_o

There’s also a video for a deeper look:

Smartgeometry 2016: Atmospheric Delight from Marc Webb on Vimeo.

It’s great to see key RAPID-MIX technologies being used for prototyping within the creative industries!

RAPID-MIX tools in Google ATAP Project Soli Developers workshop

Francisco Bernardo from the Goldsmiths EAVI team was recently invited by the Google ATAP team to Mountain View, California, to participate in the Project Soli developers workshop.

As part of the Soli Alpha Developers program, Francisco demonstrated the successful integration of the Soli sensor with RAPID-MIX tools such as Wekinator and Maximilian, using gesture recognition to control musical processes.

Workshop at the “Musical Gesture as Creative Interface” Conference

On March 16th, Atau Tanaka and Francisco Bernardo of the Goldsmiths EAVI team delivered a workshop on “Interactive Applications in Machine Learning” to attendees of the International Conference on “Musical Gesture as Creative Interface” in Porto.

About 15 participants took a hands-on approach to electromyography and machine learning for music making, using the RAPID-MIX tools BITalino and Wekinator, together with Max for biosignal conditioning and sonification through several synthesis and mapping approaches.

The workshop was a great success, with many of the participants achieving the full setup and engaging in joint experimentation with making music from biosignals.

Wekinator & Micro Bit on BBC News

Goldsmiths’ Rebecca Fiebrink demonstrates her work with RAPID-MIX technology Wekinator in one of these “Seven outstanding Micro Bit projects.”

http://www.bbc.com/news/technology-35824446

BITalino in Smart Automobile commercial

BITalino is used to implement a lie detector in this commercial for Smart Automobile:

RAPID MIX at Stanford University

Michael Zbyszyński from the RAPID-MIX team gave a talk at CCRMA (the Centre for Computer Research in Music and Acoustics) at Stanford University, a long-term pioneer in music computing (and prior home of contemporary electronic music superstar Holly Herndon as well as our own Atau Tanaka).

The audience included Thomas Rossing, John Chowning and Matt Wright. It was a chance to show some of the RAPID-MIX technologies to some legendary figures in electronic music.

The Wekinator, Ableton Live and the LOOP summit

The Wekinator, a highly usable piece of software that helps you incorporate gestural interaction into artistic projects, got a nice mention on the Ableton blog. The post was about the recent Loop summit, which explored the cutting edge of music tech, and it rightly identifies the potential for developing new musical instruments with the Wekinator. The post also mentions RAPID MIX partners IRCAM and their CoSiMa project. Read more here: https://www.ableton.com/en/blog/new-frontiers-of-music-tech/

The Wekinator is a core RAPID MIX technology, developed by team member Rebecca Fiebrink. Behind the user-friendly interface is a wealth of highly sophisticated machine learning technology that will be at the heart of many future RAPID MIX projects.

If you’ve not used Wekinator for your gestural music or art, you should have a play with it. Make sure you’ve got the latest version of Wekinator here, and keep us informed about what you’re doing with it. We’d love to hear about any projects using the Wekinator with Ableton. You can also still sign up for the online course “Machine Learning for Musicians and Artists”, taught by Rebecca, here.

RAPID MIX at Barcelona Music Hackday

We had the pleasure of attending the Barcelona Music Hackday this year. We made this short video about our visit:

 

RAPID MIX First Year Technology Prototypes

Here we have collected videos of the prototypes we’ve been developing over the first year of RAPID MIX. View them as a playlist on YouTube:

New Wekinator and Online Course in Machine Learning

RAPID MIX team member Rebecca Fiebrink has launched a new version of the machine learning software Wekinator, an incredibly powerful yet user-friendly toolkit for bringing expressive gestures into your music, art, making, and interaction design. This is a major new version that includes dynamic time warping alongside new classification and regression algorithms. You can download it here (for Mac/Windows/Linux) along with many new examples for connecting it to real-time music/animation/gaming/sensing environments: www.wekinator.org. The technology that drives Wekinator will feature in many RAPID MIX products that you’ll be hearing about in the future.

The launch coincides with the launch of a free online class run by Rebecca, Machine Learning for Musicians and Artists: https://www.kadenze.com/courses/machine-learning-for-musicians-and-artists/info If you’re interested in machine learning for building real-time interactions, sign up! No prior machine learning knowledge or mathematical background is necessary. The course is probably most interesting for people who can already program in some environment (e.g., Processing, Max/MSP) but should still be accessible to people who can’t. The course features two guest lecturers, music technology researcher Baptiste Caramiaux and composer/instrument builder/performer Laetitia Sonami.
Lecture topics will include:
• What is machine learning?
• Common types of machine learning for making sense of human actions and sensor data, with a focus on classification, regression, and segmentation
• The “machine learning pipeline”: understanding how signals, features, algorithms, and models fit together, and how to select and configure each part of this pipeline to get good analysis results
• Off-the-shelf tools for machine learning (e.g., Wekinator, Weka, GestureFollower)
• Feature extraction and analysis techniques that are well-suited for music, dance, gaming, and visual art, especially for human motion analysis and audio analysis
• How to connect your machine learning tools to common digital arts tools such as Max/MSP, PD, ChucK, Processing, Unity 3D, SuperCollider, OpenFrameworks
• Introduction to cheap & easy sensing technologies that can be used as inputs to machine learning systems (e.g., Kinect, computer vision, hardware sensors, gaming controllers)

Kurv Guitar on Sky News

We’re excited to announce more media coverage of the Kurv Guitar, one of the first commercial products to be powered by RAPID MIX technologies, on the Sky News show “Swipe”, as part of a special on new musical instruments. You can watch it here:

http://news.sky.com/story/1627847/virtual-instrument-makes-air-guitar-a-reality

The Kurv Guitar uses our own hardware design and Bluetooth firmware, along with RAPID MIX gesture recognition and synthesis technologies, such as team member Mick Grierson’s Maximilian library.

 

ROLI releases NOISE: a free app that turns the iPhone into an expressive musical instrument

It’s certainly a day for exciting news at RAPID MIX!

RAPID MIX partners ROLI, builders of the beautiful Seaboard and owners of JUCE, have released an iPhone app that turns your phone into a truly expressive musical instrument.

Get NOISE from the App Store

NOISE is the most full-bodied instrument ever to hit the glass surface of an iPhone. It turns the iPhone screen into a continuous sonic surface that responds to the subtlest gestures. Taking advantage of the new 3D Touch technology of the iPhone 6s, the surface lets music-makers shape sound through Strike, Press, Glide, Slide, and Lift – the “Five Dimensions of Touch” that customers and critics have celebrated on ROLI’s award-winning Seaboard RISE.

As an instrument NOISE features 25 keywaves, includes 25 sounds, and has five faders for fine-tuning the touch-responsiveness of the surface. It is now available for music-makers of any skill level who want to explore a multidimensionally expressive instrument that fits in their pockets.

NOISE is also the ultimate portable sound engine for the Seaboard RISE and other MIDI controllers. Powered by Equator and using MIDI over Bluetooth, NOISE lets music-makers control sounds wirelessly from their iPhones. It is one of the first apps to enable Multidimensional Polyphonic Expression (MPE) through its MPE Mode, so the NOISE mobile sound engine works with any MPE-compatible controller. ROLI’s sound designers have crafted the app’s 25 preset sounds – which include Breath Flute and Extreme Loop Synth – especially for MPE expressivity. Additional sounds can be purchased in-app.

With a Seaboard RISE, a Flip Case, and NOISE on their iPhones, music-makers now have a connected set of portable tools for making expressive music on the go.

While it works with all models of iPhone from the iPhone 5 to the iPhone 6s, NOISE has been optimized to take full advantage of 3D Touch on the iPhone 6s.

 

Kurv Guitar Kickstarter Success!

We wrote earlier about the Kurv Guitar, a project powered by RAPID MIX technologies (you can read the original post here).

Today we’re very pleased to announce that it’s reached its funding goal, with 36 days still left to go!

Have a look at the Kickstarter here.

At its heart is a powerful combination of RAPID MIX gesture recognition and synthesis technologies (such as Mick Grierson’s Maximilian library) along with our own hardware design and Bluetooth firmware.

 

PLUX releases the SnapBITs for BITalino

BITalino makers PLUX are one of the key industry partners working on the RAPID-MIX project, producing “Arduino”-like technologies specialised in body sensing and prototyping bio-hacking wearables. The RAPID-API will give multiple ways to use this information in new, creative ways.
Following the RAPID-MIX presence at the Barcelona Music Hack Day 2015, where hackers highlighted usability concerns associated with the typical wires, the BITalino team took this feedback on board and released the new SnapBITs – “getting rid of the pesky electrode cables to make life easier for BITalino users”.

Check out their website for more information, and stay tuned to the RAPID-MIX blog to find out how we can help you creatively use these exciting technologies.

Michael Zbyszyński joins the RAPID-MIX team!

We’re really excited to announce that Michael Zbyszyński has joined the RAPID-MIX team and relocated to Goldsmiths. He has a background as a musician, maker and coder, working at places inside and outside of academia including Avid, Cycling ’74 and CNMAT.
See his full bio on the Team page.

RAPID-MIX at the JUCE Summit

RAPID-MIX team member and Goldsmiths Reader in Creative Computing Mick Grierson was present at the JUCE Summit on 19–20 November, alongside representatives from Google, Native Instruments, Cycling ’74 and other key music industry players. Mick is Innovation Manager for RAPID-MIX, running the whole project as well as being a key developer of MIX technologies.

http://www.juce.com/summit#overview

Even if you haven’t heard of JUCE, chances are you’ve used it without knowing. It is an environment in which you can build plug-ins, music software and more, and is perhaps best known as the environment in which Cycling ’74’s Max/MSP is built and runs. It’s ever-present in the background of a lot of audio software. The summit brought together key technologists in the music industry, and it was a pleasure and an honour to have a RAPID MIX presence amongst them.

Mick spoke between David Zicarelli (Cycling ’74) and Andrew Bell (Cinder), presenting his C++ audio DSP engine Maximilian and describing how it integrates with JUCE, Cinder, openFrameworks and other tools. Maximilian is an incredibly powerful, lightweight audio engine which will form a key part of the RAPID-API we will be launching soon. If you want to dig deeper, check out the Maximilian library on GitHub here:

https://github.com/micknoise/Maximilian

JUCE is now owned by RAPID-MIX partners ROLI, and we’re excited about future developments involving RAPID-MIX, JUCE and the creative music industry partners present at the summit. Stay tuned for more news!

Kurv Guitar Launch

We are delighted to announce the launch of the Kickstarter for the Kurv guitar. This is the first of many projects and products that are powered by RAPID MIX technologies.

It’s the RAPID MIX gesture recognition and synthesis technologies, along with our own hardware design and Bluetooth firmware, that make projects like the Kurv Guitar possible. We expect this to be the first of many musical interfaces, ranging from games to serious musical instruments, that help turn the human body into an expressive musical tool using RAPID MIX technologies. Check out their Kickstarter below:

Getting ready for the Co-Design workshop!

Just about to start our first Co-Design workshop for the Music Hack Day at Pompeu Fabra!

RAPID-MIX in Music Hack Day, Barcelona!

RAPID-MIX will be participating in the Music Hack Day, Barcelona from June 17th to 19th. We will be conducting a workshop on participatory design and showcasing some of the technologies that we are developing in talks and performances. Come and join us for a hands-on experience with new cutting-edge technologies for music!

Second Meeting at IRCAM, Paris

The next RAPID MIX meeting will be at IRCAM in Paris from 20th to 22nd May.

This meeting is to plan and prepare for the workshops and user-centred design sessions we will be doing before and during the Barcelona Hack Day. We will be reviewing the different technologies that the partners can bring for people to work with, thinking about some early prototypes, and preparing strategies and challenges for the hackers. We’re excited about bringing these different technologies together and starting to put them in the hands of the artists, musicians and programmers who are going to be using them.

Barcelona Kickoff Meeting!

We are in Barcelona at the Music Technology Group, UPF, for the long-awaited kick-off meeting for our new EC Horizon 2020-funded ICT project, RAPID MIX. This is an initial meeting for all the partners to meet each other, discuss the different work packages that constitute the project, and start to get inspired by sharing research, products and technologies.

 

Photos from the meeting:
• Nice to feel welcome with wayfinding!
• Sergi Jorda (Music Technology Group) reminds us of project objectives
• ReacTable Systems: Gunter Geiger
• Alba Rosado from MTG on Workpackage 1

Features

Free, Open Source and Cross-Platform

One of the main outcomes of the RAPID-MIX project, the RAPID-API is a free, open-source (LGPLv3*) and cross-platform software framework that connects multimodal sensors – including position, movement and biosignal sensors – for performance, health and gaming.

Integration with Biosignal Hardware and Sensors

The RAPID-API integrates a set of multimodal acquisition sensors and hardware tools for performance, health and gaming.

Digital Signal Processing

The RAPID-API integrates the sound-related feature sets of these technologies into a sophisticated yet simple-to-use, cross-platform audio engine. Its syntax and program structure are based on the popular ‘Processing’ environment. It provides standard waveforms, envelopes, sample playback, resonant filters and delay lines; equal-power stereo, quadraphonic and 8-channel ambisonic support is included. There is also granular synthesis with timestretching, FFTs and music information retrieval.
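To give a flavour of that Processing-style structure, here is a minimal sketch using Maximilian, the library that underpins the engine (see Technologies below). It assumes the standard Maximilian player, which calls setup() once and then play() for every sample frame; the specific frequencies are arbitrary.

    #include "maximilian.h"  // github.com/micknoise/Maximilian

    maxiOsc saw, lfo;      // oscillators: sound source and slow modulator
    maxiFilter lowpass;    // resonant low-pass filter

    void setup() {
        // one-time initialisation, as in Processing
    }

    void play(double *output) {
        // called once per sample: sweep a resonant filter across a sawtooth
        double cutoff = 500 + 400 * lfo.sinewave(0.5);            // 100–900 Hz sweep
        double sample = lowpass.lores(saw.saw(110), cutoff, 0.8); // input, cutoff, resonance
        output[0] = sample;  // left channel
        output[1] = sample;  // right channel
    }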

Interactive Machine Learning

The RAPID-API will facilitate rapid development of and experimentation with machine learning in real-time domains such as music performance, health and gaming. It allows users to build interactive systems by demonstrating human actions and computer responses, rather than by programming. It offers a significant range of learning methods, including particle filtering, neural networks, AdaBoost, support vector machines, k-nearest-neighbour, decision trees and hidden Markov models.
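To make “demonstrating rather than programming” concrete, here is a minimal C++ sketch of the record-examples-then-run workflow. The type and method names are hypothetical placeholders rather than the actual RAPID-API (which is still in development), and the learner is deliberately simplified to a 1-nearest-neighbour lookup standing in for the richer methods listed above.

    // Hypothetical sketch only: placeholder names, not the published RAPID-API.
    #include <cstddef>
    #include <limits>
    #include <utility>
    #include <vector>

    struct Example {
        std::vector<double> input;    // e.g. sensor readings
        std::vector<double> output;   // e.g. sound-synthesis parameters
    };

    class NearestNeighbourMapper {
    public:
        // "Train" by demonstration: record an input/output pair
        void addExample(Example e) { examples.push_back(std::move(e)); }

        // At run time, answer with the output of the closest demonstration
        std::vector<double> run(const std::vector<double>& input) const {
            double best = std::numeric_limits<double>::max();
            const Example* nearest = nullptr;
            for (const auto& e : examples) {
                double d = 0.0;   // squared Euclidean distance
                for (std::size_t i = 0; i < input.size() && i < e.input.size(); ++i) {
                    const double diff = input[i] - e.input[i];
                    d += diff * diff;
                }
                if (d < best) { best = d; nearest = &e; }
            }
            return nearest ? nearest->output : std::vector<double>{};
        }

    private:
        std::vector<Example> examples;
    };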

Technologies

JUCE

JUCE is a wide-ranging C++ class library for building rich cross-platform applications and plugins for all the major operating systems.
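For readers who have not seen JUCE code, here is a minimal sketch of the audio component of a JUCE application that plays a quiet sine tone. It is written against the current public juce::AudioAppComponent API rather than any RAPID-MIX code, and would still need the usual JUCE application boilerplate around it.

    #include <cmath>
    #include <JuceHeader.h>

    class SineDemo : public juce::AudioAppComponent {
    public:
        SineDemo()           { setAudioChannels(0, 2); }  // no inputs, stereo out
        ~SineDemo() override { shutdownAudio(); }

        void prepareToPlay(int /*samplesPerBlock*/, double sampleRate) override {
            phaseDelta = 440.0 * juce::MathConstants<double>::twoPi / sampleRate;
        }

        void getNextAudioBlock(const juce::AudioSourceChannelInfo& info) override {
            // write a 440 Hz sine wave into every output channel
            for (int i = 0; i < info.numSamples; ++i) {
                auto sample = 0.25f * (float) std::sin(phase);
                phase += phaseDelta;
                for (int ch = 0; ch < info.buffer->getNumChannels(); ++ch)
                    info.buffer->setSample(ch, info.startSample + i, sample);
            }
        }

        void releaseResources() override {}

    private:
        double phase = 0.0, phaseDelta = 0.0;
    };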

Gesture Agents

Gesture Agents is a framework for building collaborative multi-user systems with support for concurrent multi-tasking and shareable interfaces running simultaneously in different applications.

Collaborative Situated Media

The Collaborative Situated Media framework provides a set of components for audio processing, motion analysis and collective interaction based on web/mobile technologies (HTML5, JavaScript and the Web Audio API).

XMM – Motion and Mapping Models

XMM is a C++ library that combines multimodal and hierarchical Hidden Markov Models to model gesture and sound parameters, allowing the creation of mappings between gesture and sound in interactive music systems.

Gesture Follower

GF is a realtime gesture analyser that continuously outputs parameters characterising a gesture, based on recorded templates, making it well suited to selecting and synchronising visual or sound control processes with gestures.

Interactive Audio Engine

The IAE is an embeddable C++ synthesis engine for content-based audio processing. The engine extracts audio descriptors from recorded audio materials and provides asynchronous/synchronous granular synthesis and additive synthesis.

Freesound

Freesound.org is an online collaborative sound database where people from different disciplines share recorded sound clips under Creative Commons licenses.

Essentia

Essentia is an open-source C++ library for audio analysis and audio-based music information retrieval, with prebuilt music descriptors and extractors. It is designed for fast prototyping and for setting up research experiments very rapidly.
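As a taste of that rapid prototyping, here is a small sketch using Essentia’s standard (synchronous) C++ mode to load an audio file and compute its overall RMS; the file path is a placeholder, and the same factory pattern applies to the other descriptors.

    #include <iostream>
    #include <vector>
    #include <essentia/algorithmfactory.h>

    using namespace essentia;
    using namespace essentia::standard;

    int main() {
        essentia::init();
        AlgorithmFactory& factory = AlgorithmFactory::instance();

        // Decode a mono audio file (placeholder path)
        Algorithm* loader = factory.create("MonoLoader",
                                           "filename", "input.wav",
                                           "sampleRate", 44100);
        std::vector<Real> audio;
        loader->output("audio").set(audio);
        loader->compute();

        // Compute the RMS of the decoded signal
        Algorithm* rms = factory.create("RMS");
        Real value = 0;
        rms->input("array").set(audio);
        rms->output("rms").set(value);
        rms->compute();

        std::cout << "RMS: " << value << std::endl;

        delete loader;
        delete rms;
        essentia::shutdown();
        return 0;
    }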

HapticWave

The HapticWave is a haptic audio waveform display device that has been developed in collaboration with a group of audio engineers and producers with visual impairments for use in real world recording studio environments.

BITalino

BITalino is a low-cost toolkit to learn and prototype applications using body signals. It’s for students, teachers, makers, artists, researchers, corporate R&D… no electrical skills required. 

Repovizz

Repovizz is a cloud service for collaborative data-driven research projects on performance and body motion, supporting structural formatting, remote storage, browsing, exchange, annotation, and visualization of synchronous multimodal and time-aligned data.

MAVEN

Maven is a statistical modelling and machine-learning framework that uses models of audiovisual attention derived from users’ eye tracking and EEG recordings. It supports the creation of context-aware features for information visualisation and audiovisual scene representation.

Maximilian

Maximilian is a C++ library, designed to ease the use of a wide array of audio features such as synthesis, filtering, FFT, etc. It has a syntax and program structure based on the popular ‘Processing’ environment.
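Alongside the synthesis sketch shown under Features, here is a minimal example of Maximilian’s sample playback, again assuming the standard player’s setup()/play() structure; the file path is a placeholder.

    #include "maximilian.h"

    maxiSample beat;  // sample player

    void setup() {
        // load a sound file once (placeholder path)
        beat.load("/full/path/to/beat.wav");
    }

    void play(double *output) {
        double s = beat.play();  // loops the sample at its original rate
        output[0] = s;
        output[1] = s;
    }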

Wekinator

The Wekinator is a software package for building interactive systems by demonstrating human actions and computer responses; it facilitates rapid development of and experimentation with machine learning in live music performance and other real-time domains.
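Wekinator is typically driven over OSC: by default it listens on port 6448 for input vectors sent to the address /wek/inputs, one float per input. The sketch below sends it a two-value input using the oscpack library, an assumed third-party dependency rather than part of RAPID-MIX.

    #include "ip/UdpSocket.h"
    #include "osc/OscOutboundPacketStream.h"

    int main() {
        // Wekinator's documented defaults: localhost, port 6448, /wek/inputs
        UdpTransmitSocket socket(IpEndpointName("127.0.0.1", 6448));

        char buffer[1024];
        osc::OutboundPacketStream p(buffer, sizeof(buffer));
        p << osc::BeginMessage("/wek/inputs")
          << 0.25f << 0.75f   // one float per Wekinator input
          << osc::EndMessage;

        socket.Send(p.Data(), p.Size());
        return 0;
    }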

Gesture Variation Follower

Gesture Variation Follower is a C++ library for realtime gesture recognition and variation estimation. It provides methods to easily learn a gesture vocabulary, recognise gestures during performance, and estimate their variations.

Team

Diemo Schwarz

Diemo Schwarz is a researcher–developer in real-time applications of computers to music, with the aim of improving musical interaction, notably sound analysis–synthesis and interactive corpus-based concatenative synthesis.

He holds a PhD in computer science applied to music, for which he developed a new method of concatenative musical sound synthesis by unit selection from a large database. This work continues in the CataRT application for real-time interactive corpus-based concatenative synthesis within Ircam’s Sound Music Movement Interaction team (ISMM).

His current research comprises uses of tangible interfaces for multi-modal interaction, generative audio for video games, virtual and augmented reality, and the creative industries.

As an artist, he composes for dance, video, and installations, and interprets and performs improvised electronic music with his solo project Mean Time Between Failure, in various duos and small ensembles, and as a member of the 30-piece ONCEIM improvisers’ orchestra.

http://imtr.ircam.fr/imtr/Diemo_Schwarz

http://diemo.concatenative.net

Amaury La Burthe

Amaury holds an MSc from IRCAM. He first worked as an assistant researcher for Sony Computer Science Laboratory and then as lead audio designer for the video game company Ubisoft in Montreal. In 2009 he founded the start-up AudioGaming, focused on creating innovative audio technologies. AudioGaming expanded its activities in 2013 through its brand Novelab, which creates immersive and interactive experiences (video games, VR, installations, …). Amaury recently worked on projects like Type:Rider with Arte and Kinoscope with Google as executive producer, and on Notes on Blindness VR as Creative Director and Audio Director.

Joseph Larralde

Joseph Larralde is a programmer in IRCAM’s ISMM team. He is also a composer/performer using new interfaces to play live electroacoustic music, focusing on the gestural expressiveness of sound synthesis control. His role in the RAPID-MIX project is to develop a collection of prototypes that demonstrate combined uses of all the partners’ technologies, to bring existing machine-learning algorithms to the web, and more broadly to merge IRCAM’s software libraries with those from UPF and Goldsmiths in the RAPID-API.

Frédéric Bevilacqua

Frédéric Bevilacqua is the head of the Sound Music Movement Interaction team at IRCAM in Paris (part of the joint research lab Science & Technology for Music and Sound – IRCAM – CNRS – Université Pierre et Marie Curie). His research concerns the development of gesture-based musical interactive systems, movement computing, and sensori-motor learning with auditory feedback. He holds a master’s degree in physics and a PhD in biomedical optics from EPFL in Lausanne. He also studied music at the Berklee College of Music in Boston and has participated in various music and media arts projects. From 1999 to 2003 he was a researcher at the Beckman Laser Institute at the University of California, Irvine. In 2003 he joined IRCAM as a researcher on gesture analysis for music and the performing arts.

Hugo Silva

Hugo Silva holds an MSc in Electrical and Computer Engineering from the Instituto Superior Técnico (IST), University of Lisbon (UL). Since 2004 he has been a researcher in the Pattern and Image Analysis (PIA) group at the Instituto de Telecomunicações (IT); in 2012 he was a visiting researcher at the University of Florida, working in the Computational NeuroEngineering Laboratory (CNEL) and collaborating with the Center for the Study of Emotion and Attention (CSEA). Hugo is a co-founder and Chief Innovation Officer of PLUX, where he has participated in more than 10 national and European projects funded by grants from the 7th Framework Programme (EU-FP7), the National Strategic Reference Framework (QREN/NSRF), and several other private and public institutions. His work has been recognised on several occasions, including his nomination as a top talent for 2014 by Notícias Magazine, his selection as one of only 10 semi-finalists worldwide at Engadget Expand New York Insert Coin 2013, and the Life Sciences Award in 2010 at a venture competition co-promoted by MIT.

Jean-Baptiste Thiebaut

Jean-Baptiste Thiebaut obtained a PhD from Queen Mary University of London, studying human-computer interaction in the field of music composition. He was Innovation Manager at Focusrite prior to joining ROLI in 2012, and leads product commercialisation and industry and academic partnerships at ROLI.

Xavier Boissarie

Xavier Boissarie is the founder of Orbe, a Senior Game Designer and a project manager.

Sergi Jorda

Prof. Sergi Jordà, PI of this project, holds a B.S. in Fundamental Physics and a PhD in Computer Science and Digital Communication. He is a researcher in the Music Technology Group of Universitat Pompeu Fabra in Barcelona, and a tenured Associate Professor at the same university, where he teaches computer music, Human-Computer Interaction (HCI), and interactive media arts. He has authored 20+ articles in journals and book chapters and 60+ peer-reviewed conference papers. He has received several international awards, including the prestigious Prix Ars Electronica Golden Nica. Best known as one of the inventors of the Reactable, a tabletop musical instrument that achieved mass popularity after being integrated into Icelandic artist Björk’s last world tour, he is also one of the founding partners of the spin-off company Reactable Systems. He has participated in 6 funded projects from both the EC and the Spanish government, and he is currently the PI of the STREP project ‘GiantSteps’ (FP7-610591).

Panos Papiotis

Panagiotis Papiotis received his B.Sc. in Computer Science from the Informatics Department of the Athens University of Economics and Business, Greece, in 2009, and his M.Sc. in Sound and Music Computing from the Dept. of Information and Communication Technologies of the Universitat Pompeu Fabra in Barcelona, Spain, in 2010. He is currently a Ph.D. student at the Music Technology Group of the Universitat Pompeu Fabra, where he is carrying out research on computational analysis of ensemble music performance as well as multimodal data acquisition and organisation. He has participated in the FP7 project SIEMPRE (FP7-250026).

Sebastian Mealla

Sebastián Mealla C. holds a B.S. in Audiovisual Communication and an MSc in Cognitive Systems and Interactive Media (CSIM). He is a PhD candidate in the Music Technology Group (MTG) of Universitat Pompeu Fabra (UPF) in Barcelona, and a member of the Musical and Advanced Interaction team (MTG-UPF).
Mealla’s research focuses on Human-Computer Interaction and Physiological Computing. In 2008 he received the MAEC-AECID scholarship to pursue his research on neuroscience and HCI at UPF and, since then, has obtained the support of the Ministry of Science and Innovation of Spain (TEC2010) and of Starlab Living Science for the development of Brain-Computer Interfaces for multimodal interaction.

Adam Parkinson

Adam Parkinson is a researcher, performer and curator based in the EAVI research group. He was artistic co-chair of the 2014 New Interfaces for Musical Expression conference, and has organised events at the Whitechapel Gallery alongside the long-running EAVI nights in South East London. As a programmer and performer he has worked with artists including Arto Lindsay, Caroline Bergvall, Phill Niblock, Rhodri Davies and Kaffe Matthews, and has performed throughout Europe and North America. He is interested in new musical instruments and the discourses around them, and in the cultural and critical spaces between academia and club culture.

Michael Zbyszyński

Dr. Michael Zbyszyński is a Research Associate in the Department of Computing, Goldsmiths. As a musician, his work spans from brass bands to symphony orchestras, including composition and improvisation with woodwinds and electronics. He has been a software developer at Avid, SoundHound, Cycling ’74, and Keith McMillen Instruments, and was Assistant Director of Pedagogy at UC Berkeley’s Center for New Music and Audio Technologies (CNMAT). He holds a PhD from UC Berkeley and studied at the Academy of Music in Kraków on a Fulbright Grant. His work has been included in Make Magazine, the Rhizome Artbase, and on the ARTSHIP recording label.

http://www.mikezed.com/

Francisco Bernardo

Francisco Bernardo is a researcher, interactive media artist and software designer. He holds a B.S. in Computer Science and Systems Engineering and an MSc in Mobile Systems, both from the University of Minho. He also holds an M.A. in Management of Creative Industries from the Portuguese Catholic University, specialising in Creativity and Innovation in the Music Industry. He lectured at the Portuguese Catholic University and worked in software industry R&D for corporate TV, interactive digital signage and business intelligence, developing video applications, complex user-interface architectures, and interaction design for desktop, web, mobile and augmented reality applications. Currently, Francisco is a PhD candidate in Computer Science at Goldsmiths, University of London, in the Embodied Audiovisual Interaction (EAVI) group, focusing on new Human-Computer Interaction approaches to music technology.

 

Atau Tanaka

Prof. Dr. Atau Tanaka is full Professor of Media Computing. He holds a doctorate from Stanford University, and has carried out research at IRCAM, Centre Pompidou and Sony Computer Science Laboratories (CSL) in Paris. He was one of the first musicians to perform with the BioMuse biosignal interface in the 1990s. Tanaka has been Artistic Ambassador for Apple and was Director of Culture Lab at Newcastle University, where he was Co-I and Creative Industries lead in the Research Councils UK (RCUK) £12M Digital Economy hub, Social Inclusion through the Digital Economy (SiDE). He is the recipient of an ERC Starting Grant for MetaGesture Music, a project applying machine learning techniques to gain a deeper understanding of musical engagement. He leads the UK Engineering & Physical Sciences Research Council (EPSRC) funded Intelligent Games/Game Intelligence (IGGI) centre for doctoral training.

Mick Grierson

Dr. Mick Grierson is a Senior Lecturer, convenor of the Creative Computing programme, and Director of the Goldsmiths Digital consultancy. He has held Knowledge Transfer grants for collaboration with companies in the consumer BCI, music and games industries. In addition, he works as a consultant to a wide range of SMEs and artists in areas of creative technology. His software for audiovisual performance and interaction has been downloaded hundreds of thousands of times by VJs and DJs, and is used by high-profile artists and professionals, including a large number of media artists and application developers. Grierson was the main technology consultant for some of the most noteworthy gallery and museum installations since 2010, including Christian Marclay’s internationally acclaimed The Clock, which won the Golden Lion at the Venice Biennale; Heart n Soul’s Dean Rodney Singers (part of the Paralympics Unlimited Festival 2012 at London Southbank); and the London Science Museum’s From Oramics to Electronica.

Rebecca Fiebrink

Dr. Rebecca Fiebrink is a Lecturer in Computing. She works at the intersection of human-computer interaction, applied machine learning, and music composition and performance, and convenes the course Perception and Multimedia Computing. Her research interests include new technologies for music composition, performance and scholarship; enabling people to apply machine learning more efficiently and effectively to real-world problems; supporting end-user design of interactive systems for health, entertainment and the arts; creating new gesture- and sound-based interaction techniques; designing and studying technologies to support creative work and humanities scholarship; music information retrieval; human-computer interaction; computer music performance; and education at the intersection of computer science and the arts.

About

The RAPID-MIX consortium accelerates the production of the next generation of Multimodal Interactive eXpressive (MIX) technologies by producing hardware and software tools and putting them in the hands of users and makers. We have devoted years of research to the design and evaluation of embodied, implicit and wearable human-computer interfaces and are bringing cutting-edge knowledge from three leading European research labs to a consortium of five creative companies.

Methodologies

We draw upon techniques from user-centred design to put the end users of our tools and products at centre stage. Uniquely, the “users” in our project include both individuals and small companies, ranging from musicians to health professionals, hackers, programmers, game developers and more. We will share what we learn about bringing these varied users directly into the development process. We will also be developing methodologies for evaluating the success of our technologies in people’s lives and artistic practices, and share our experiences of these methods.

RAPID-API

The RAPID-API will be a comprehensive, easy-to-use toolkit that brings together the different software elements necessary to integrate a whole range of novel sensor technologies into products, prototypes and performances. Users will have access to advanced machine learning algorithms that can transform masses of sensor data into expressive gestures that can be used for music or gaming. A powerful but lightweight audio library provides easy-to-use tools for complex sound synthesis.

Prototypes

We need fast product design cycles to reduce the gap between laboratory-based academic research and everyday artistic or commercial use of these technologies, and rapid prototyping is one way of achieving this. Our API tools are designed to help produce agile prototypes that can then feed back into how we tweak our API to make it as usable, accessible and useful as possible.

MIX Products

We anticipate that our technologies will be used in a variety of products, ranging from expressive new musical instruments to next-generation game controllers, interactive mobile apps and quantified-self tools. We want to provide a universal toolkit for plugging the wide-ranging expressive potentials of human bodies directly into digital technologies.

RAPID-MIX is an Innovation Action funded by the European Commission (H2020-ICT-2014-1, Project ID 644862).