Description: AI-generated audio featuring bossa nova music with electric guitar.png
Demonstration of an algorithmically generated audio track featuring bossa nova music accompanied by electric guitar, created using Riffusion (https://www.riffusion.com/about), an open-source, fine-tuned derivative of the Stable Diffusion image-generation diffusion model that has been retrained to generate images of audio spectrograms, which can then be converted into audio files.
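For readers who want to see the text-to-spectrogram step concretely, a minimal sketch follows, assuming the Riffusion checkpoint published on the Hugging Face Hub as riffusion/riffusion-model-v1 and the diffusers library; the prompt and settings are illustrative and are not a record of how this particular file was produced.

import torch
from diffusers import StableDiffusionPipeline

# Sketch only: because Riffusion is a fine-tuned Stable Diffusion derivative,
# the standard text-to-image pipeline can load its checkpoint. The model id
# "riffusion/riffusion-model-v1" and all settings below are assumptions.
pipe = StableDiffusionPipeline.from_pretrained(
    "riffusion/riffusion-model-v1",
    torch_dtype=torch.float16,
).to("cuda")

# The prompt describes a musical motif rather than a visual scene; the output
# is a 512x512 image of an audio spectrogram, not a picture.
result = pipe("bossa nova with electric guitar", width=512, height=512)
result.images[0].save("spectrogram.png")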
An audio spectrogram is a visual representation of an audio clip's frequency content over time. Because a spectrogram image stores only the magnitudes of a short-time Fourier transform (STFT), converting it back into audio requires estimating the missing phase; the Griffin-Lim algorithm is used to approximate the phase before applying the inverse STFT during audio reconstruction. While the Stable Diffusion AI model was originally intended to generate visual images from a textual prompt, Riffusion has been fine-tuned from the Stable Diffusion v1.5 checkpoint to instead generate spectrogram images from text prompts describing musical motifs, with the fine-tuning performed on Nvidia A10G enterprise datacenter GPUs.
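A minimal sketch of that reconstruction step is shown below, assuming the image encodes a magnitude spectrogram on a decibel scale with an 80 dB dynamic range; Riffusion's actual pipeline uses mel-scaled spectrograms and its own conversion parameters, so the pixel-to-magnitude mapping, sample rate, and hop length here are illustrative assumptions only.

import numpy as np
from PIL import Image
import librosa
import soundfile as sf

SAMPLE_RATE = 44100   # assumed sample rate
HOP_LENGTH = 512      # assumed STFT hop length

# Read the spectrogram image as a 2-D array of pixel intensities.
pixels = np.asarray(Image.open("spectrogram.png").convert("L"), dtype=np.float32)

# Frequency increases upward in the image but downward in array row order,
# so flip the rows, then undo the assumed pixel -> decibel -> magnitude mapping.
db = (np.flipud(pixels) / 255.0) * 80.0 - 80.0
magnitude = librosa.db_to_amplitude(db)

# The image stores only magnitudes; Griffin-Lim iteratively estimates the
# missing phase and applies the inverse short-time Fourier transform.
audio = librosa.griffinlim(magnitude, n_iter=32, hop_length=HOP_LENGTH)

sf.write("reconstructed.wav", audio, SAMPLE_RATE)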
Riffusion generates 512×512-pixel images, each representing a five-second chunk of looping audio. For the convenience of the reader, the three generated spectrogram images have been merged in GIMP along the x-axis (which represents time), and the corresponding audio files have been joined in Audacity and then converted to OGG Vorbis; an equivalent programmatic approach is sketched below.
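The following sketch shows a programmatic equivalent of those manual GIMP and Audacity steps, joining three spectrogram images side by side and concatenating the three audio clips; the file names are placeholders, and the final OGG Vorbis encoding is left to an external encoder.

import numpy as np
from PIL import Image
import soundfile as sf

# Placeholder file names for the three generated clips.
image_names = ["clip1.png", "clip2.png", "clip3.png"]
audio_names = ["clip1.wav", "clip2.wav", "clip3.wav"]

# Join the spectrogram images side by side; the x-axis represents time.
images = [Image.open(name) for name in image_names]
merged = Image.new("RGB", (sum(img.width for img in images), images[0].height))
offset = 0
for img in images:
    merged.paste(img, (offset, 0))
    offset += img.width
merged.save("merged_spectrogram.png")

# Concatenate the audio clips end to end; conversion to OGG Vorbis can then
# be done with any encoder.
clips = []
rate = None
for name in audio_names:
    data, rate = sf.read(name)
    clips.append(data)
sf.write("merged_audio.wav", np.concatenate(clips), rate)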
As the creator of the output images and audio, I release this file under the licence displayed within the template below.
Stable Diffusion AI model
The Stable Diffusion AI model is released under the CreativeML OpenRAIL-M License, which "does not impose any restrictions on reuse, distribution, commercialization, adaptation" as long as the model is not intentionally used to cause harm to individuals, for instance to deliberately mislead or deceive. As stipulated by the license, the authors of the model claim no rights over any image outputs it generates.
Riffusion v1 model
The Riffusion v1 model, created by Seth Forsgren and Hayk Martiros, is released under the CreativeML OpenRAIL-M License and is a derivative model of the Stable Diffusion v1.5 model checkpoint.
You are free:
to share – to copy, distribute and transmit the work
to remix – to adapt the work
Under the following conditions:
attribution – You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
share alike – If you remix, transform, or build upon the material, you must distribute your contributions under the same or compatible license as the original.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled GNU Free Documentation License (http://www.gnu.org/copyleft/fdl.html).
You may select the license of your choice.