GestureAgents is a framework for building multi-user systems with support for concurrent multi-tasking. Shareable interfaces, those that can be used by more than one person at a time, can play an important role in collaboration; but for that to happen, systems must allow participants to carry out different tasks simultaneously. GestureAgents provides a way to implement such systems and to recognize the gestures of different applications running at the same time.
While the HCI community has put a lot of effort into creating physical interfaces for collaboration, studying multi-user interaction dynamics, and creating specific applications to support (and test) such phenomena, it has not addressed the problem of multiple applications sharing the same interactive space.
Having an ecology of rich interactive programs sharing the same interface raises the question of how to resolve interaction ambiguity across applications while still giving programmers the freedom to build rich, unconstrained interaction experiences.
GestureAgents is therefore a framework for coordinating different applications, enabling concurrent multi-user multi-tasking interaction while handling gesture ambiguity across multiple applications.
Read more here
Check out the repository here