HarpCI is a study that explores how gestures can be used to control and transform sound and light projection in live performance with the electric harp. We discuss the use of guitar pedals with the electric harp, and the limitations they impose, and then introduce the MyoSpat system as a potential solution to this issue. MyoSpat aims to give musicians control over auditory and visual aspects of the performance through easy-to-learn, intuitive and natural hand gestures. HarpCI draws on research from the areas of Human Computer Interaction (HCI) and Music Interaction Design (MiXD) to extend the creative possibilities available to the performer, and demonstrates our approach to bridging the gap between the performer/composer and the harp on one side, and the technology on the other.

We consider the development of music with live electronics, with particular reference to the harp repertoire, and include interviews with six harpists who use technology in their professional performance practice. The goal of our research is to provide harpists with the tools to control and transform the sounds of their instrument in a natural and musical way.

Myo Mapper is a free and open-source cross-platform application to map data from the gestural device Myo armband into Open Sound Control (OSC) messages. It provides an easy-to-use tool for musicians to explore the Myo's potential for creating new gesture-based musical interfaces. Together with details of the software, this paper reports on projects realised with the Myo Mapper as well as a user evaluation. We propose guidelines for using Myo data in interactive artworks based on insight gained from the works described and the evaluation. We show that Myo Mapper empowers artists and non-skilled developers to easily take advantage of raw Myo data and to work with high-level signal features for the realisation of interactive artistic and musical works. Myo Mapper: 1) solves an IMU drift problem to allow multimodal interaction; 2) facilitates a clear workflow for novice users; 3) includes extraction of useful EMG features; and 4) connects to popular machine learning software for bespoke gesture recognition.

HCI practices are increasingly focused on creating a natural user experience and embodied interaction through gestural control. Body movements that coincide with sounds consist of both performed 'sound-producing' gestures and ancillary and communicative movements. Thus, different gestural typologies may relate to the same audio source. Furthermore, gestures may depend on the context in which they have been expressed; in other words, they can carry different semantic or semiotic meanings in relation to the situation, environment or reality in which they have been enacted. In order to explore these research themes, we are developing gSPAT: a software and hardware system able to drive live sound spatialisation for interactive audio performance using gestural control based on human-meaningful gesture-sound relationships. The ultimate aim is to provide a highly natural and musically expressive sound spatialisation experience for the performer. Here we describe three experiments conducted to explore possible directions for the future of gSPAT's development. The tests employ a range of practice-based and ethnographic research methods to establish applicability, naturalness and usability across a range of approaches to the interaction design of the system.
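To make the gesture-to-sound pipeline shared by Myo Mapper and gSPAT concrete, the sketch below shows one plausible mapping: orientation data arriving as OSC messages (the format Myo Mapper emits) converted to equal-power stereo pan gains. This is an illustrative sketch, not code from either system; the OSC address `/myo/yaw`, the port number and the yaw-to-pan mapping are all assumptions.

```python
# Minimal sketch (not part of Myo Mapper or gSPAT): receive a normalised
# yaw value over OSC and map it to equal-power stereo pan gains.
# The address "/myo/yaw" and port 5432 are illustrative assumptions.
import math

from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer


def yaw_to_pan_gains(yaw: float) -> tuple[float, float]:
    """Map yaw in [0, 1] to equal-power left/right gains."""
    theta = yaw * math.pi / 2          # 0 = hard left, pi/2 = hard right
    return math.cos(theta), math.sin(theta)


def on_yaw(address: str, yaw: float) -> None:
    left, right = yaw_to_pan_gains(max(0.0, min(1.0, yaw)))
    print(f"{address}: yaw={yaw:.2f} -> L={left:.2f} R={right:.2f}")


dispatcher = Dispatcher()
dispatcher.map("/myo/yaw", on_yaw)     # assumed address pattern

if __name__ == "__main__":
    server = BlockingOSCUDPServer(("127.0.0.1", 5432), dispatcher)
    server.serve_forever()             # drive it from any OSC sender
```

Any OSC-capable sender (for example Max's udpsend object) can drive the sketch, which makes it easy to test a candidate mapping before wiring it to live armband data.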
Sound spatialisation is an important component in interactive performances as well as in game audio and virtual or mixed reality systems. In this paper we present the rationale and design for two systems (developed by the Integra Lab research group at Birmingham Conservatoire) implementing a common approach to interactive visualisation of the spatial position of 'sound-objects'. The first system forms part of the AHRC-funded project 'Transforming Transformation: 3D Models for Interactive Sound Design', which entails the development of a new interaction model for audio processing whereby sound can be manipulated through grasp as if it were an invisible 3D object. The second system concerns the spatial manipulation of 'beatboxer' vocal sound using handheld mobile devices through already-learned physical movement. In both cases a means to visualise the spatial position of multiple sound sources within a 3D 'stereo image' is central to the system design, so a common model for this task was developed. This paper describes the ways in which sound and spatial information are implemented to meet the practical demands of these systems, whilst relating this to the wider context of extant and potential future methods for spatial audio visualisation.
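The common visualisation model itself is not reproduced here, but its core computation can be sketched: each sound-object's position, naturally expressed for spatial audio as azimuth, elevation and distance relative to the listener, must be converted to Cartesian coordinates before it can be drawn in a 3D scene. The sketch below is a generic illustration under a common coordinate convention, not the papers' actual model; the SourcePosition type and the axis orientation are assumptions.

```python
# Generic sketch (assumed, not the papers' model): convert a sound-object's
# (azimuth, elevation, distance) into Cartesian coordinates for rendering
# inside a 3D 'stereo image'.
import math
from typing import NamedTuple


class SourcePosition(NamedTuple):
    azimuth_deg: float    # 0 = front, positive = clockwise (to the right)
    elevation_deg: float  # 0 = ear level, positive = above
    distance: float       # metres from the listener


def to_cartesian(p: SourcePosition) -> tuple[float, float, float]:
    """Return (x, y, z): x = right, y = front, z = up of the listener."""
    az = math.radians(p.azimuth_deg)
    el = math.radians(p.elevation_deg)
    x = p.distance * math.cos(el) * math.sin(az)
    y = p.distance * math.cos(el) * math.cos(az)
    z = p.distance * math.sin(el)
    return x, y, z


# Example: two sources, front-left and rear-right, ready to plot.
for src in (SourcePosition(-30, 0, 2.0), SourcePosition(135, 15, 3.5)):
    print(src, "->", tuple(round(c, 2) for c in to_cartesian(src)))
```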