OpenAI’s MuseNet started streaming on Twitch.
MuseNet generates MIDI data. That data is then rendered out using synthesizers, which is what you hear on the Twitch stream.
Composition Quality: As good as most people’s. The Bach styles are quite good (better than most people could write). The contemporary styles – Broadway, Movie Themes, and TV Themes – are not as good. Okay, most of them sound terrible, but they would be passable if played by human performers.
Audio Quality: Worse than most people. These renderings aren’t going to take anyone’s jobs, and there is little of use here beyond novelty – but that’s not the AI’s fault. Someone could hook it up to better virtual instruments, and I’d bump this rating up.
Importance: Medium. This isn’t the first MIDI AI to come along, but it is fairly general, in that it can currently handle a number of different styles. It also has a nice web interface and streams on Twitch sometimes!
Long-Term Prospects: Low. There are a lot of abstractions going on here. The training data is MIDI, so the output is also MIDI. This AI will never learn how to play the violin or pipe organ, but it does a good job of writing notes for a violin. This is a composition helper tool more than a finished-music generation tool.
This technology does a good job of understanding chords and melodies. If you paired it with a better MIDI rendering engine, you could create usable background music for public spaces and video games.
The note-based MIDI approach is a lighter-weight solution that doesn’t require as much computing power as generating a 16-bit, 44.1 kHz waveform directly.
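To get a feel for the difference, here is a back-of-the-envelope comparison of the two data rates. The MIDI figures (30 note events per second, ~6 bytes per event) are illustrative assumptions, not measurements of MuseNet itself:

```python
# Rough data-rate comparison: raw audio vs. note-based MIDI.
# The MIDI density below is an assumed figure for a busy passage.

SAMPLE_RATE = 44_100       # samples per second (CD quality)
BYTES_PER_SAMPLE = 2       # 16-bit audio

# Mono 16-bit waveform: bytes generated per second of music.
audio_bytes_per_sec = SAMPLE_RATE * BYTES_PER_SAMPLE

# Assumed dense MIDI passage: 30 note events/sec, ~6 bytes each
# (note-on plus note-off, with status, pitch, and velocity).
midi_bytes_per_sec = 30 * 6

print(audio_bytes_per_sec)                        # 88200
print(midi_bytes_per_sec)                         # 180
print(audio_bytes_per_sec // midi_bytes_per_sec)  # 490
```

Even under these generous MIDI assumptions, the model has to produce a few hundred times less data per second of music when it predicts notes instead of samples.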
You can try it out now! (through May 15th, 2019)
The demo only outputs audio files – so the rendering quality you get is the quality you get.