Mitchell Clark at The Verge:
Google researchers have made an AI that can generate minutes-long musical pieces from text prompts, and can even transform a whistled or hummed melody into other instruments, similar to how systems like DALL-E generate images from written prompts (via TechCrunch). The model is called MusicLM, and while you can’t play around with it for yourself, the company has uploaded a bunch of samples that it produced using the model.
The examples are impressive. There are 30-second snippets of what sound like actual songs created from paragraph-long descriptions that prescribe a genre, vibe, and even specific instruments, as well as five-minute-long pieces generated from one or two words like “melodic techno.” Perhaps my favorite is a demo of “story mode,” where the model is basically given a script to morph between prompts. For example, this prompt:
electronic song played in a videogame (0:00-0:15)
meditation song played next to a river (0:15-0:30)
fire (0:30-0:45)
fireworks (0:45-0:60)
Resulted in the audio you can listen to here.
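To make the shape of that script concrete, here's a minimal sketch of the prompt schedule as plain data. MusicLM isn't publicly available, so the Segment type and every name below are illustrative assumptions, not the model's actual interface; the sketch only captures the timed-prompt structure of the example above.

# Hypothetical representation of a MusicLM "story mode" script:
# a sequence of text prompts, each active over a time window.
from dataclasses import dataclass

@dataclass
class Segment:
    start_s: int  # window start, in seconds
    end_s: int    # window end, in seconds
    prompt: str   # text prompt active during this window

# The story-mode script quoted above: one prompt per 15-second window.
story = [
    Segment(0, 15, "electronic song played in a videogame"),
    Segment(15, 30, "meditation song played next to a river"),
    Segment(30, 45, "fire"),
    Segment(45, 60, "fireworks"),
]

for seg in story:
    print(f"{seg.start_s}s-{seg.end_s}s: {seg.prompt}")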
More here. [Better examples here.]