MATRIXSYNTH: MEML



Sunday, July 20, 2025

MEMLNaut - Total sound control with machine learning


video uploads by MEML

A 2025 MIDI INNOVATION AWARD entry

"Bring your sounds to life with the MEMLNaut, the first neural symbiotic controller for synths and effects: discover new sounds instantly, live, with any rig."

Learn more about the MEMLNaut and Musically Embodied Machine Learning: https://musicallyembodiedml.github.io/

Some info saved for the site archives:



"MEMLNaut is a musical instrument and research platform for exploration of musically embodied machine learning. It can process and generate sound, it’s extensible with a wide range of IO and interfaces, and it can optimise and run machine learning models on it’s dual core microcontroller chip.

MEMLNAut is open source, you can find all the Kicad files on our github"
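The dual-core detail suggests a familiar embedded split: one core handles sensor and synth I/O while the other runs inference. Below is a minimal sketch of that pattern, assuming a Raspberry Pi Pico-class dual-core MCU and the Pico SDK's inter-core FIFO; this is illustrative only, not the MEMLNaut firmware (the project's GitHub is the authoritative reference):

// Illustrative sketch only -- not the MEMLNaut firmware. Assumes a
// Raspberry Pi Pico-class dual-core MCU and the Pico SDK.
#include <cmath>
#include <cstdint>
#include "pico/stdlib.h"
#include "pico/multicore.h"

// A tiny fixed 2-4-1 MLP mapping two control inputs to one synth parameter.
// The weights are placeholders; on a real device they would be trained.
static const float w1[4][2] = {{0.5f,-0.3f},{0.2f,0.8f},{-0.6f,0.1f},{0.4f,0.4f}};
static const float b1[4]    = {0.0f, 0.1f, -0.1f, 0.0f};
static const float w2[4]    = {0.7f, -0.2f, 0.5f, 0.3f};
static const float b2       = 0.0f;

static float mlp_forward(float x0, float x1) {
    float y = b2;
    for (int i = 0; i < 4; ++i)
        y += w2[i] * std::tanh(w1[i][0] * x0 + w1[i][1] * x1 + b1[i]);
    return y;
}

// Core 1: pop packed 16-bit control values off the inter-core FIFO, run
// inference, push the result back. This keeps the model off the I/O core.
static void core1_entry() {
    while (true) {
        uint32_t in = multicore_fifo_pop_blocking();
        float x0 = (float)(in & 0xFFFFu) / 65535.0f;
        float x1 = (float)(in >> 16)     / 65535.0f;
        float y  = std::fmin(std::fmax(mlp_forward(x0, x1) * 0.5f + 0.5f,
                                       0.0f), 1.0f);
        multicore_fifo_push_blocking((uint32_t)(y * 65535.0f));
    }
}

int main() {
    stdio_init_all();
    multicore_launch_core1(core1_entry);
    while (true) {
        // In a real instrument these would come from ADC/sensor reads.
        uint32_t controls = (32768u << 16) | 16384u;
        multicore_fifo_push_blocking(controls);
        uint32_t param = multicore_fifo_pop_blocking();
        (void)param; // ...apply as a synth or effect parameter here
        sleep_ms(1);
    }
}

The FIFO hand-off is one common way to decouple model latency from sensor polling; a real instrument would likely use double-buffering or shared memory for audio-rate work.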

"MEML is funded by the UK Arts and Humanities Research Council. It is an investigation into the musically expressive potential of machine learning (ML) when embodied within physical musical instruments. It proposes ‘tuneable ML’, a novel approach to exploring the musicality of ML models, when they can be adjusted, personalised and remade, using the instrument as the interface.

ML has been highly successful in allowing us to build novel creative tools for musicians: for example, generative models that bring new approaches to sound design, or models that allow musicians to build complex, nuanced mappings with musical gestures. These instruments offer new forms of creative expression because they are configurable in intuitive ways, using data that can be created by musicians. They can also offer new modes of control, with techniques such as latent space manipulation.

Currently, standard practice for training an ML model is to collect data (e.g. sound or sensor data) and to create and pre-test the model within a data science environment, before testing it with the instrument. This distributed approach creates a disconnection between the instrument and the machine learning processes. With ML embodied within an instrument, musicians will be able to take a more creative and intuitive approach to making and tuning models, one that will also be more inclusive of those without expertise in ML. Musicians can get the most value from ML if the whole process of machine learning is accessible; there are many creative possibilities in the training and tuning of models, so it is valuable for the musician to have access to the curation of data, the curation of models, and to methods for ongoing retuning of models over their lifetime.

We have reached the point where ML technology will run on lightweight embedded hardware at rates sufficient for audio and sensor processing. This opens up innumerable additions to our electronic, digital, and hybrid augmented acoustic instruments. Our instruments will contain lightweight embedded computers with ML models that shape key elements of the instrument's behaviour, for example sound modification or gesture processing, responding to sensory input from the player and/or environment. This project will demonstrate how tuneable ML creates novel musical possibilities, as it allows the creation of self-contained instruments that can evolve independently of the complex data science tools conventionally used for ML.

The project asks how instruments can be designed to make effective and musical use of embedded ML processes, and it questions the implications for instrument designers and musicians when tuneable processes are a fundamental driver of an instrument's musical feel and musical behaviour."
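The contrast the text draws between the "distributed approach" and embodied, tuneable ML is easy to make concrete. Below is a minimal, self-contained C++ sketch (my own assumptions throughout, not MEML's code) of a tuneable gesture-to-parameter mapping: the musician records a few sensor-to-parameter examples on the instrument, a tiny network is trained on-device by gradient descent, and retuning later just means recording more examples and running more steps:

// Illustrative sketch of on-device "tuneable ML": a one-hidden-layer
// regressor trained by gradient descent from musician-recorded examples.
// Standard C++ only; not the MEML codebase.
#include <cstdio>
#include <cmath>
#include <vector>

struct Example { float sensor[2]; float param; }; // gesture -> sound parameter

struct TinyNet {
    static constexpr int H = 8;
    float w1[H][2]{}, b1[H]{}, w2[H]{}, b2{};

    TinyNet() { // small deterministic init, enough to break symmetry
        for (int i = 0; i < H; ++i) {
            w1[i][0] = 0.1f * (i - H / 2);
            w1[i][1] = -0.05f * i;
            w2[i] = 0.1f;
        }
    }

    float forward(const float x[2], float h[H]) const {
        float y = b2;
        for (int i = 0; i < H; ++i) {
            h[i] = std::tanh(w1[i][0] * x[0] + w1[i][1] * x[1] + b1[i]);
            y += w2[i] * h[i];
        }
        return y;
    }

    // One stochastic-gradient step on squared error; cheap enough for an MCU.
    void train_step(const Example& e, float lr) {
        float h[H];
        float y  = forward(e.sensor, h);
        float dy = y - e.param;                        // dLoss/dy
        for (int i = 0; i < H; ++i) {
            float dh = dy * w2[i] * (1.0f - h[i] * h[i]); // backprop through tanh
            w2[i]    -= lr * dy * h[i];
            b1[i]    -= lr * dh;
            w1[i][0] -= lr * dh * e.sensor[0];
            w1[i][1] -= lr * dh * e.sensor[1];
        }
        b2 -= lr * dy;
    }
};

int main() {
    // The musician "plays in" a few examples: sensor pose -> desired parameter.
    std::vector<Example> takes = {
        {{0.1f, 0.9f}, 0.2f}, {{0.8f, 0.2f}, 0.9f}, {{0.5f, 0.5f}, 0.5f},
    };
    TinyNet net;
    for (int epoch = 0; epoch < 2000; ++epoch)  // retuning later just means
        for (const auto& e : takes)             // adding examples and running
            net.train_step(e, 0.05f);           // more steps
    float h[TinyNet::H];
    for (const auto& e : takes)
        std::printf("target %.2f -> model %.2f\n", e.param,
                    net.forward(e.sensor, h));
}

The point of the sketch is that everything a data science environment would normally own, the data, the model, and the training loop, fits in a few kilobytes on the instrument itself, which is the accessibility and ongoing-retuning argument the project makes.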




© Matrixsynth - All posts are presented here for informative, historical and educative purposes as applicable within fair use.