In May 2019, I participated in a hackathon called “Blockathon de la Musique” and was teamed up with four awesome people. Together, we came up with the idea of “Artelligence,” which uses AI to create images that can be used as album covers to help musicians.
The tool uses a GAN model (GANgogh) to create images in a particular stylistic category or genre, such as rock, classical, or jazz. A pre-selection of standard images, inspired by cover artwork from the Discogs database, is offered to help users generate ideas and designs. The user can also upload an audio file, and the song’s genre is extracted from integrated databases or intelligent audio analysis, optionally combined with descriptive text. In short, the user has three input options: genre, audio, and text. Each option is powered by AI, which generates images related to the user’s selection. The user can then select the image they prefer and edit it further.
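To make that flow a little more concrete, here is a minimal, hypothetical sketch of the audio-to-cover path in Python (librosa + PyTorch). The `Generator` stand-in, the `GENRES` list, and the tempo-based genre guess are illustrative assumptions only; the actual hackathon build relied on GANgogh and the Discogs-inspired data rather than this exact code.

```python
# Hypothetical sketch: upload audio -> guess genre -> sample candidate covers.
import numpy as np
import librosa
import torch
import torch.nn as nn

GENRES = ["rock", "classical", "jazz"]  # labels the (assumed) model knows

class Generator(nn.Module):
    """Stand-in conditional generator: latent z + genre label -> 64x64 RGB image."""
    def __init__(self, z_dim=100, n_classes=len(GENRES)):
        super().__init__()
        self.embed = nn.Embedding(n_classes, z_dim)
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, 64 * 64 * 3), nn.Tanh(),
        )

    def forward(self, z, labels):
        h = z * self.embed(labels)          # simple conditioning on the genre label
        return self.net(h).view(-1, 3, 64, 64)

def genre_from_audio(path: str) -> str:
    """Very rough genre guess from tempo; a real build would use a trained classifier."""
    y, sr = librosa.load(path, duration=30)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
    tempo = float(np.atleast_1d(tempo)[0])
    if tempo > 120:
        return "rock"
    if tempo > 90:
        return "jazz"
    return "classical"

def generate_covers(genre: str, n: int = 4) -> torch.Tensor:
    """Sample n candidate cover images for the chosen genre."""
    gen = Generator()                       # in practice, load pretrained weights here
    z = torch.randn(n, 100)
    labels = torch.full((n,), GENRES.index(genre), dtype=torch.long)
    with torch.no_grad():
        return gen(z, labels)               # (n, 3, 64, 64) images in [-1, 1]

# Example: covers = generate_covers(genre_from_audio("song.mp3"))
```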
The benefits of strong artwork and design for a musician’s album cannot be overstated: it helps the artist stand out from the crowd and can be tailored to the target audience. However, most independent musicians do not have the time, skill, or budget to hire professional designers for their releases. This is where an AI-powered image generation tool comes in, providing a quick and easy solution to a common problem faced by DIY artists.
The team came up with the idea for the tool as part of the “Blockathon de la Musique” hackathon, which aimed to use technology to solve the problems modern-day musicians – especially those just starting out – face. We conducted an online survey of potential users, particularly artists and designers, to gain insights into the challenges they face when creating album artwork. The survey revealed that the biggest challenges were matching the mood of the music exactly, finding an idea that represents the published title well, staying in harmony with the musical theme, reflecting the image of the artist, and capturing the mood of the song.
Sleepless Night
Training the model on a laptop was slower than we expected, and we only had one night to teach it about music genres, a handful of songs and lyrics, and images. While the developers were cramming to train the model on the sample set, the other members and I brainstormed how to turn this into a proper product. We did some quick research on use cases and came up with a simple user flow and prototype.
Survey
As part of an online survey of potential users, particularly artists and designers, we asked: “What is the biggest challenge if your artwork is for music?”
The most frequent answers were as follows:
• to exactly match the mood of the music
• to find an idea that represents the published title well
• to be in harmony with the musical theme
• to reflect the image of the artists and capture the mood of the song
• to tailor it to the target audience
Sitemap
As part of the planning and brainstorming process, we focused on giving our solution a clear function. The insights from the survey made it easy for us to narrow down the features we wanted to present.
Here is the flow we defined:
From the dashboard, the user’s goal is to create an image for their album cover. They have three options to select from: Genre, Audio, and Text. Each option is powered by AI, which generates images related to the user’s selection. The user can select an image they prefer and edit it further.
Quick Prototyping
We developed the tool in just 30 hours, using a quick prototyping process to design the interface. The team’s hard work paid off, as we won 3rd place in the hackathon.
Reflections
Coming up with a solution within 30 hours was stressful, but I loved the challenge it brought. It forced me to think outside the box, and I feel I’ve grown as a designer. I was also grateful to meet awesome people and have the chance to work with them.