Seth Forsgren, Co-founder and CEO of Riffusion – Interview Series

Seth Forsgren, co-founder and CEO of Riffusion, led the development of an AI-powered music generation tool that creates audio tracks from text prompts. Riffusion lets users try different music styles and sounds in real time, making music creation more accessible. The platform is designed for creativity and ease of use, allowing anyone to explore AI-generated music without formal musical training.

Riffusion is the best AI music generator I have personally tried, and I recommend it to anyone interested in AI-generated music.

Can you take us back to the early days of Riffusion? What was the initial spark that led you to build AI music generation tools?

Riffusion began as a hobby project between two lifelong musicians. My co-founder Hayk and I have played in an amateur band for over a decade, and we have always been fascinated by the creative act. One afternoon we were writing a song in my backyard and, looking for inspiration, we started experimenting with early AI models that could generate images from text. But what we really wanted was a tool that could make music with us, an AI we could collaborate with to come up with new melodies and sounds that no one had ever heard. Nothing like that existed at the time.

When did you realize it had the potential to become a full-fledged company?

The turning point came when we shared our hobby project with a few friends and it spread everywhere. It wasn't just curious technologists or AI enthusiasts; professional musicians, producers, and millions of everyday music lovers engaged with it in ways we never anticipated. Some of our favorite artists in the world started using samples they created with Riffusion!

The project also inspired Google, ByteDance, and others to begin their own AI music efforts building on our work, and it became clear this wasn't just an experiment; it was the foundation for something bigger. As a company, we now have the opportunity to bring this new instrument to creatives everywhere.

What were your biggest technical and business challenges in moving from an experiment to a commercial product?

We have come a long way technically. Our first models produced grainy, five-second, low-fidelity music clips; now we can generate full-length, high-quality songs with excellent controllability and expressiveness. That took significant advances in model architecture and a willingness to constantly rethink things from scratch. It's a credit to the outstanding researchers on our team that we've come this far, and we know the technology is still only beginning.

On the business side, we have had to think deeply about Riffusion's place in the music industry. AI music is still new, and while we see incredible adoption by amateur creators and professionals alike, there is an ongoing conversation about how AI and human creativity coexist. Our focus has always been on enhancing musicians' abilities, not replacing them, giving people new tools to explore their creativity in ways they never imagined.

Riffusion initially focused on producing short musical riffs, but it can now generate full-length works. What advances allowed you to expand its capabilities?

By training our own foundation models from scratch, we have been able to improve the quality, expressiveness, and controllability of Riffusion's output. The development and release of our latest model, Fuzz, was a major breakthrough. In blind testing, Fuzz consistently outperformed competing models when given the same lyrics and sound prompts. The model is also uniquely designed to help users find their personal voice: the more time a user spends with Riffusion, the more Fuzz learns their personal taste and the more personal the generated music becomes. We think this is a huge differentiator for Riffusion.

Many AI music models struggle to maintain emotional depth. How does Riffusion capture the nuances of different emotions and styles?

Music is profoundly personal and emotional, and we want Riffusion to create music that resonates on a human level. As our advisor Alex Pall put it, "It's not about making a sound. It's about making people feel something special through the sound."

Just as a well-made violin allows artists to express themselves fully, we train our models to be instruments guided by the user's creativity. Whether you input a melody, a text prompt, or even an image, the model adapts to your intent, and the output can be shaped to reflect different emotions, dynamics, and stylistic choices. We focus on the users who come back day after day to make great music on the platform.

As AI-generated music continues to evolve, how do you see it complementing rather than replacing human creativity?

AI is an instrument for musicians, not a substitute for them. Throughout history, new music tools, from synthesizers to digital audio workstations, have expanded what artists can create without diminishing human artistry. Riffusion follows the same philosophy. We see Riffusion as an instrument that encourages musicians to experiment, collaborate, and try new forms of storytelling. Artists still bring the soul and intention to their music; AI helps bring those ideas to life. We're thrilled to see people embrace this tool every day and find joy in the creative process.

How have musicians and producers responded to Riffusion's capabilities? Have you seen any unexpected or innovative uses of the tool?

The response has been incredible. A few years ago, only a handful of people were using these tools, but now the number of professional musicians and producers embracing the technology grows dramatically every week. We have seen artists use Riffusion to riff on new melodies, craft new sounds, and even create entire albums. Some have fused Riffusion generations with live instrumentation to create entirely new genres. One of the most exciting things is seeing how people take the tool and make it their own, whether that's producing music from natural sounds, testing experimental pieces, or scoring films.

Now that long-form music generation is here, do you see potential for AI-generated scores in movies, video games, or other media?

The ability to produce long-form music undoubtedly makes Riffusion a powerful tool for the larger media landscape. We've seen interest from filmmakers, game developers, and content creators who want unique scores that adapt to their narratives, perhaps even in real time. It's clear that AI can help storytellers express themselves across mediums, and we're only at the beginning of this field.

Looking ahead, what is your ultimate vision for Riffusion? How do you see it shaping the future of music creation?

Today, only a small number of people make music, but creativity is inherent to all of us. By building tools that lower the barriers to entry while also raising the ceiling of what's sonically possible, Riffusion can become an essential tool for anyone looking to create, experiment, and connect through music. Whether you are a professional producer or someone who has never written a song before, we want Riffusion to be the instrument that helps you find your sound.

Thank you for the great interview. Readers interested in making their own music should visit Riffusion.
