Creating artistic visual music systems typically demands expertise in complex software like Max/MSP or TouchDesigner, and can break creative flow by forcing artists to switch between separate environments. While many accessible visual music tools exist, they often lack the artistic flexibility and personalization musicians need. v.4MP is a Max for Live device designed to empower musicians, especially free improvisers, to build customized audio-visual systems that authentically represent their music and artistic vision.
v.4MP achieves this through intuitive, flexible mapping between audio features and visual parameters, including options to train ML models for more complex mappings—all within the familiar Max or Ableton Live UI. By using artist-imported media files as graphic elements, v.4MP supports high variability in visual output and personalization. Additionally, through a machine listening algorithm and variable audio analysis mappings, the system is able to learn and adapt to a performer’s unique musical style and help develop a distinctive visual language.
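The core idea behind this kind of mapping — rescaling an audio feature into a visual-parameter range — can be sketched in a few lines. This is a hypothetical Python illustration of the general technique, not v.4MP's actual Max implementation; the feature names and ranges are assumptions for the example:

```python
def map_feature(value, in_min, in_max, out_min, out_max):
    """Linearly rescale an audio feature (e.g. RMS loudness) into a
    visual-parameter range (e.g. opacity), clamping to the output bounds."""
    if in_max == in_min:
        return out_min
    t = (value - in_min) / (in_max - in_min)
    t = max(0.0, min(1.0, t))  # clamp so out-of-range input stays in bounds
    return out_min + t * (out_max - out_min)

# e.g. map an RMS value of 0.25 (analysis range 0..0.5) to opacity in 0..1
opacity = map_feature(0.25, 0.0, 0.5, 0.0, 1.0)  # → 0.5
```

A device like v.4MP layers flexibility on top of this: which feature feeds which parameter, and with what curve, is chosen by the performer — or learned by an ML model when a simple one-to-one rescaling isn't expressive enough.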
The creation of v.4MP involved an intensive, iterative UX research, design, and development process aimed at meeting the specific needs of free-improvising performers, addressing a gap in existing tools. v.4MP has already been showcased in multiple performances, including the Georgia Tech School of Music's Brain Wave Music recital (watch below).
For a full overview of the research, design process, and technical details, view the research report here. v.4MP is free to use and available for download on GitHub. Enjoy!