Absolutely! I’ve been wondering about doing the following:
- music gets created on #[3]
- something like MusicLM is run in reverse (audio-to-text captioning) to extract a prompt from the music
- that prompt is fed to a generative video model (see the rough sketch after this list)
- that video accompanies the original music
- we have a visual feed section in stemstr to highlight all music that utilizes this approach
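Just to make the flow concrete, here's a rough Python sketch of what the pipeline could look like. Everything in it is hypothetical: `describe_music`, `generate_video`, `VisualizedTrack`, and `visual_feed` are made-up names, and the actual captioning and text-to-video models are left as stubs, since nothing like this exists in stemstr today.

```python
from dataclasses import dataclass
from pathlib import Path

# Hypothetical sketch only: the captioning model stands in for the
# "MusicLM in reverse" idea, and the video model for whatever
# text-to-video service would actually be used.

@dataclass
class VisualizedTrack:
    audio_path: Path
    prompt: str
    video_path: Path

def describe_music(audio_path: Path) -> str:
    """Audio-captioning step: turn a track into a text prompt."""
    # Plug in an audio-to-text captioning model here.
    raise NotImplementedError

def generate_video(prompt: str, out_path: Path) -> Path:
    """Text-to-video step: render a clip from the extracted prompt."""
    # Plug in a text-to-video model/service here.
    raise NotImplementedError

def music_to_video(audio_path: Path) -> VisualizedTrack:
    """Glue the steps together: music -> prompt -> video."""
    prompt = describe_music(audio_path)
    video_path = generate_video(prompt, audio_path.with_suffix(".mp4"))
    return VisualizedTrack(audio_path=audio_path, prompt=prompt, video_path=video_path)

def visual_feed(tracks: list[VisualizedTrack]) -> list[VisualizedTrack]:
    """The visual feed is then just the tracks that have a video attached."""
    return [t for t in tracks if t.video_path.exists()]
```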