
Sunday, January 15, 2017

MusicPostBot909303 - Wavenet "deepdreamed" IDM

This one was spotted and sent my way via Altemark. In short, these are compositions generated by a neural-network bot running in the cloud.

The SoundCloud description states the following:

"Bot automatically uploading experimental electronic music created using deep neural network. As of this moment, the network is at very early stages of training, and is more of a synthesizer than a fully fledged music producer. In order to expand on its abilities, I need your help to fund training.

2x 8-core 64-bit
Training rate: ~60 sec/step
Receptive Field: ~125 ms
Sample Rate: 32 kHz

Help me learn faster:
BTC: 18cDZhzvihYy6Kd9UZipwi4wtEah7AA57X
Paypal: vektor@oxo-unlimited.com

All donations go towards buying more training power!"

The following is some additional info posted by the creator, Veqtor, on Muff's:

"This is project I've been working on, running on two 8-core machines in the cloud. One trains a deep neural network, called a wavenet, on experimental electronic music, the other downloads the latest model generated by the trainer, generates 20 secs of audio, creates a name, uploads to SC and then goes on to do the same with the newest model.
I finally got it up and running tonight, generating 20 secs takes about 5 hours. Dead Banana
Training has been running around the clock for about three months now.

If I had the resources I'd upgrade to running a Nvidia Titan at home, which would speed up training from ~60 sec per step to ~0.5 sec per step. Hoping it'll go viral somehow and I get bitcoin donations to fund cloud training on Amazon Web Services.

I have a long roadmap of improvements but it's hard to work for free doing weird AI stuff"
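The generator box's loop, as described above (poll for the newest checkpoint, render 20 seconds, invent a title, upload to SoundCloud, repeat), can be sketched roughly like this. Every function name here is a hypothetical stand-in; the bot's actual code isn't shown in the thread.

```python
# Rough sketch of the generator machine's loop from the quote above.
# All helpers are hypothetical placeholders, not the bot's real code.
import time

def fetch_latest_checkpoint():
    """Download the newest model saved by the training machine."""
    return "model-step-20000"  # placeholder checkpoint name

def generate_audio(checkpoint, seconds=20):
    """Sample `seconds` of audio from the model (~5 h per 20 s here)."""
    return b""  # placeholder for rendered PCM bytes

def invent_title():
    """Make up a track name for the upload."""
    return "untitled-" + str(int(time.time()))

def upload_to_soundcloud(title, audio):
    print(f"uploaded {title!r} ({len(audio)} bytes)")

def run_once():
    ckpt = fetch_latest_checkpoint()
    audio = generate_audio(ckpt, seconds=20)
    upload_to_soundcloud(invent_title(), audio)

run_once()
```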


"maltemark wrote:
What data set of IDM are you using for food to do this? Sounds fucking awesome. Trying to help the viralizing get into motion.

Initially it was my own music, but since around 20k steps ago (about 30% of current training) it's been running on a larger dataset that also includes all the usual suspects like Autechre, AFX, Squarepusher, Amnesia Scanner, M.E.S.H., etc.

maltemark wrote:
EDIT: we need much more powerful processors in eurorack so we can get this as a module paired up with ER-301:s and the likes.

Yes, actually, TensorFlow models can be quantized to run simple classification tasks at low rates (think 10 Hz / triggered / sequenced) for stuff like generative CV and similar. This could mean that we could possibly run a rather simple model for generative music on something like a Raspberry Pi Zero"
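To make the quantization idea concrete: the point is that int8 weights and integer matmuls are cheap enough for low-rate inference on tiny hardware. Below is a toy sketch of one quantized inference step producing a CV-like value; the weights and per-layer scales are invented for illustration (a real deployment would use TensorFlow's quantization tooling rather than hand-rolled arrays).

```python
# Toy sketch: one inference step of a small int8-quantized network, the kind
# of workload that could plausibly run at ~10 Hz on a Raspberry Pi Zero.
# Weights and scales are made up for illustration.
import numpy as np

rng = np.random.default_rng(42)

# Pretend these came out of quantizing a trained float model.
W1 = rng.integers(-127, 128, size=(8, 4), dtype=np.int8)  # input 4 -> hidden 8
W2 = rng.integers(-127, 128, size=(1, 8), dtype=np.int8)  # hidden 8 -> 1 output
SCALE1, SCALE2 = 0.02, 0.01  # assumed per-layer dequantization scales

def step(x: np.ndarray) -> float:
    """One low-rate inference step: int8 matmuls, float rescale, CV out."""
    h = np.maximum(W1.astype(np.int32) @ x, 0)   # ReLU, accumulated in int32
    y = (W2.astype(np.int32) @ (h * SCALE1))[0]  # dequantize hidden activations
    return float(np.tanh(y * SCALE2))            # squash to a CV-like [-1, 1]

cv = step(np.array([10, -3, 7, 1], dtype=np.int32))
print(cv)
```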


"akrylik wrote:
Are the soundcloud posts in order of training duration? If so, doesn't it seem like the results are getting closer and closer to white noise as training progresses?

Very cool idea, by the way.

It might seem that way, but what is really happening is that it "oscillates" between signal and noise as part of the gradient-descent optimization. This is normal. I hope to put an algorithm into the generation phase that gives up if the current step is generating too much noise (some sort of FFT-based spectral-width measure or something). Another thing that might be causing the noise is that it could have reached a step that is overfitting a bit; what this means in practice is that, because the network is essentially a super complex filter, seeding it with a very bright and noisy seed will also cause it to produce output similar to that seed."
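The "FFT-based" noise check described above could be as simple as thresholding spectral flatness (the geometric mean of the power spectrum over its arithmetic mean), which sits near 1 for white noise and near 0 for tonal material. A minimal sketch, with an assumed threshold, not the author's actual algorithm:

```python
# Sketch of a "give up if the output is too noisy" check using spectral
# flatness; the 0.5 threshold is an illustrative assumption.
import numpy as np

def spectral_flatness(audio: np.ndarray) -> float:
    """Flatness in [0, 1]: white noise -> ~1, a pure tone -> ~0."""
    power = np.abs(np.fft.rfft(audio)) ** 2 + 1e-12  # avoid log(0)
    return float(np.exp(np.mean(np.log(power))) / np.mean(power))

def too_noisy(audio: np.ndarray, threshold: float = 0.5) -> bool:
    """Reject a generated chunk whose spectrum is too close to white noise."""
    return spectral_flatness(audio) > threshold

rng = np.random.default_rng(0)
noise = rng.standard_normal(32_000)  # one second of white noise at 32 kHz
tone = np.sin(2 * np.pi * 440 * np.arange(32_000) / 32_000)
print(too_noisy(noise), too_noisy(tone))  # True False
```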
