In my work, I explore how best to enable users to interact with supervised learning algorithms to compose and perform new music. I have built a general-purpose software system for applying standard supervised learning algorithms in real-time problem domains. This system, called the Wekinator, supports human interaction throughout the entire supervised learning process, including the generation of training examples and the application of trained models to real-time inputs. The Wekinator has already enabled the creation of several new compositions and instruments. Furthermore, it has allowed me to study several aspects of human-computer interaction with supervised learning in computer music. I have used the Wekinator as a foundation for a participatory design process with practicing composers, ongoing work with non-expert users in a classroom context, and the design of a gesture recognition system for a sensor-augmented cello bow.
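The interactive workflow described above — a user supplies training examples, trains a model, and then applies it to a stream of real-time inputs, iterating as needed — can be sketched in miniature. The code below is an illustrative sketch only, not the Wekinator's actual API; the class name, methods, and the choice of a 1-nearest-neighbor learner are all assumptions made for the example.

```python
import math

class InteractiveModel:
    """Minimal interactive supervised-learning loop (1-nearest-neighbor).

    Illustrates the cycle of demonstrating examples, training, and
    applying the model to incoming inputs; hypothetical, not Wekinator code.
    """

    def __init__(self):
        self.examples = []  # list of (feature_vector, label) pairs

    def add_example(self, features, label):
        # The user demonstrates an input/output pair
        # (e.g., a gesture feature vector mapped to a sound label).
        self.examples.append((features, label))

    def predict(self, features):
        # Applied to each incoming real-time input frame:
        # return the label of the nearest stored example.
        if not self.examples:
            raise ValueError("no training examples yet")
        return min(self.examples,
                   key=lambda ex: math.dist(ex[0], features))[1]

# The user iterates: demonstrate a few examples, try the model,
# then add or revise examples until the mapping feels right.
model = InteractiveModel()
model.add_example([0.1, 0.2], "soft")
model.add_example([0.9, 0.8], "loud")
print(model.predict([0.85, 0.9]))  # → loud
```

In practice a system like this would swap in a richer learner and read features from sensors at audio or control rate, but the interaction loop — demonstrate, train, run, revise — is the same.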
This research has led to a clearer characterization of the requirements and goals of instrument builders and composers, a better understanding of how to design user interfaces for supervised learning in both real-time and creative application domains, and deeper insight into the roles that interaction (encompassing both human-computer control and computer-human feedback) can play in the development of systems containing supervised learning components. This work highlights how music and other creative endeavors differ from more traditional applications of supervised learning, and it contributes to a broader HCI perspective on machine learning practice.