Waterloo researchers’ technology can help musicians find inspiration and new creative directions for their songwriting. Members of the University’s Natural Language Processing Lab developed LyricJam, a real-time system that uses artificial intelligence (AI) to generate lyric lines for live instrumental music.
For several years, the lab has been researching creative applications of AI under the direction of Olga Vechtomova, a Waterloo Engineering professor who is also cross-appointed in Computer Science. The initial work of the lab resulted in the development of a system that learns musical expressions of artists and generates lyrics in their style.
Working with Waterloo graduate students Gaurav Sahu and Dhruv Kumar, Vechtomova recently developed technology that uses various aspects of music, such as chord progressions, tempo, and instrumentation, to synthesize lyrics reflecting the mood and emotions expressed by live music.
As a musician or band performs instrumental music, the system continuously receives raw audio clips, which the neural network processes to generate new lyric lines. The artists can then use these lines in writing their own song lyrics.
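The article does not spell out the architecture, but the general pattern it describes, a neural network that turns incoming audio into candidate lyric lines, can be sketched briefly. The following is a minimal illustration in PyTorch, not the researchers' actual model; the file name, token ids, vocabulary size, and network sizes are all placeholders.

```python
# Minimal illustrative sketch of audio-conditioned lyric generation.
# NOT the LyricJam architecture; all names and sizes are placeholders.
import torch
import torch.nn as nn
import torchaudio

class AudioConditionedLyricModel(nn.Module):
    def __init__(self, vocab_size, n_mels=64, hidden=256):
        super().__init__()
        # Summarize the clip's mel spectrogram into a single state vector.
        self.audio_encoder = nn.GRU(n_mels, hidden, batch_first=True)
        # Generate a lyric line token by token, conditioned on that state.
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.to_vocab = nn.Linear(hidden, vocab_size)

    def forward(self, mel, tokens):
        # mel: (batch, time, n_mels); tokens: (batch, seq_len) word ids
        _, state = self.audio_encoder(mel)       # encode the audio clip
        word_vecs = self.embed(tokens)
        out, _ = self.decoder(word_vecs, state)  # decode under audio state
        return self.to_vocab(out)                # next-word logits

# Convert a raw clip into the model's input representation.
to_mel = torchaudio.transforms.MelSpectrogram(sample_rate=22050, n_mels=64)
waveform, sr = torchaudio.load("clip.wav")       # hypothetical audio file
mel = to_mel(waveform).mean(dim=0).T.unsqueeze(0)  # (1, time, n_mels)

model = AudioConditionedLyricModel(vocab_size=10_000)
start = torch.tensor([[1]])                      # hypothetical <start> id
next_word = model(mel, start)[0, -1].argmax()    # greedy next-word choice
```

In the live setting the article describes, a loop like this would simply repeat on each new clip as it arrives, so the generated lines keep tracking the music.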
“The system’s goal is not to write a song for the artist,” Vechtomova explains. “Instead, we want to assist artists in discovering their own creativity. The system generates poetic lines with new metaphors and expressions, potentially leading the artists in previously unexplored creative directions.”
The researchers’ neural network learns which lyrical themes, words, and stylistic devices are associated with various aspects of music captured in each audio clip. For example, the researchers discovered that lyrics generated for ambient music differ significantly from those generated for upbeat music.
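The article does not say which audio properties the system picks up on, but standard signal features already separate ambient from upbeat music, which makes the contrast concrete. This hedged sketch uses the librosa library, an assumption rather than the researchers' documented toolchain, and a placeholder file name.

```python
# Hedged illustration: simple audio features that tend to distinguish
# ambient from upbeat music. librosa is an assumption, not necessarily
# what the researchers used.
import librosa

y, sr = librosa.load("clip.wav")                  # hypothetical clip
tempo, _ = librosa.beat.beat_track(y=y, sr=sr)    # beats per minute
loudness = librosa.feature.rms(y=y).mean()        # average RMS energy
chroma = librosa.feature.chroma_stft(y=y, sr=sr)  # pitch-class profile,
                                                  # a rough harmony proxy
print("estimated tempo (BPM):", tempo)
print("mean loudness (RMS):", loudness)
```

An upbeat clip would typically show a higher tempo estimate and more energy than an ambient one, though a neural network like the one described would typically learn its own representations from the raw audio rather than rely on hand-picked features like these.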
The research team conducted a user study in which musicians were invited to play their instruments live while using the system. “One unexpected finding was that participants felt encouraged by the generated lines to improvise,” Vechtomova says. “The lines, for example, inspired artists to structure chords differently and take their improvisation in a different direction than they had intended. Some musicians also used the lines to check whether their improvisation produced the desired emotional effect.”
Another finding from the study highlighted the co-creative aspect of the experience. Participants described the system as an uncritical jamming partner that encouraged them to keep playing their instruments even when they were not actively trying to write lyrics.
LyricJam has attracted more than 1,500 users worldwide since going live in June of this year. The team’s research is available as a preprint on arXiv and will be presented at the International Conference on Computational Creativity in September.