The Rise of the Machines: You may have seen the videos on YouTube by now – regular people sitting at home, using an artificial intelligence music app, sometimes more than one in conjunction, to create a song from scratch. The results are often mixed, to put it kindly. But importantly, they prove that it’s possible to instruct these programs to create an original song, and they will.
You also may have heard about FN Meka, the virtual rapper that was signed to Capitol Records on Aug. 14 of this year, and then dropped on Aug. 23 due to controversy about the developers’ stereotyping of black people through the character.
“FN Meka blurs the line between humans and computers,” read a press release at the time of the signing. “With his over-the-top flexing and extravagant sense of style, he has rapidly amassed billions of impressions across the internet since the independent release of his singles ‘Moonwalkin,’ ‘Speed Demon’ and ‘Internet.’ With over one billion views and 10 million followers on TikTok alone, he is the No. 1 virtual being on the platform.”
So let’s be clear – FN Meka was dropped due to cultural insensitivity, not because the industry considers signing a virtual being at all absurd. Next time, they’ll likely get the details right, and all of the industry’s resources will be thrown behind a musician who does not exist.
It would be tiresome to complain about the technology. Back when electronic music started to emerge in the ‘70s and ‘80s, and when DJs received the attention previously reserved for “musicians,” older music aficionados would complain and younger eyes would collectively roll. We don’t want to do that.
But the larger question is: If virtual musicians are getting signed, and if just about everybody can use these AI tools in their home to create original music at the push of a button, how many working musicians, producers, etc., are going to be put out of work? And could the day arrive when we don’t need human musicians at all?
Perhaps, but it’s worth remembering that people don’t buy recorded music anymore — live music is where musicians make their money. Still, in this era of electronic music, it’s not hard to imagine a show featuring an AI character beamed onto screens, while somebody watches over a laptop. In a world where Gorillaz exists, is it so hard to imagine going to see an animated artist?
Diaa El All is the CEO of Soundful, which he describes as “a human-aided AI music creation platform. It helps everyone from artists, producers and content creators to TV and film production houses to create CD-quality music at the touch of a button.”
El All is a musician himself, having played piano from the age of three. He also has a degree in sound engineering and production, so he’s lived the life of a working musician and producer. He’s also heard all of the fears and concerns about AI taking over.
“It’s a very interesting thing,” he says. “Music or not, we’re surrounded. Everything around us is powered by machine learning and artificial intelligence, period. From Siri to Alexa to Google Home to everything, really. I look at Soundful a little bit differently. I look at Soundful as, we’re democratizing music creation for everybody, the same way that the phone has democratized video creation. The phone is just a tool in your pocket. Photographers are able to just take videos or photos right away, high quality. There hasn’t been something to really break the barrier of entry to music creation the same way that Soundful has achieved. Not only that, but also helping well-established producers and artists by really being a tool to augment human creativity rather than replacing it. And build on it.”
The key, El All says, is to make the tool simple enough for anyone – literally anyone – to use. Those aforementioned YouTube videos show people who, while not displaying any noticeable musical skill, still get tangled up on the tech side.
“This is something that we’ve been working really hard on – how we can simplify it enough so that, exactly like the iPhone, you can give it to a 13-year-old and they’ll figure it out,” El All says. “Soundful is the same thing, and it’s a powerful tool for the 13-year-olds, all the way up to the Grammy Award-winning producers and musicians that use our platform as well. It helps to get started on ideas, get creative. In the studio, if they want to come up with a few ideas really quickly, they’re able to use Soundful at the touch of a button.”
Kanru Hua is the CEO and research engineer at Tokyo-based Dreamtonics, the company responsible for creating Synthesizer V. In Hua’s words, “You come up with the lyrics and melody. Synthesizer V sings that for you. And, if you don’t like how it is performed, you can tweak it however you like (e.g., sing this part with POWER). What it means for musicians: there’s tremendous flexibility gained because you don’t need to worry about recording the vocals again following a last-minute change.”
So you can see how the puzzle comes together. You create the melody with a tool such as Soundful or AIVA, generate lyrics with a random lyric generator (of which there are plenty online) and have Synthesizer V sing them. It sounds scarily futuristic – like Skynet trying its hand at songwriting. But Hua says it’s all about how the tech is applied.
“It is probably hard to discuss AI music as a single topic because it depends on how you apply the technology,” he says. “In our case, we’re making a tool for music creators. We’re trying to simplify and accelerate the workflow of vocal production. We heard people worrying that AI is robbing the jobs from musicians. While the creation of certain types of music, in particular when function is preferred over artistic values (e.g., ‘just add some background music there so it doesn’t sound awkward’), can indeed be automated to some point; it is up to the listeners to decide, and one thing AI alone cannot replace is the very notion of a humane identity behind the voice.”
El All agrees, saying that he absolutely does not ever see a world where human musicians are obsolete.
“What will make a top song a hit, in my opinion, is not the perfection of it,” he says. “It’s the human element that adds the imperfection to the song that makes it unique, that makes it human. Anything is possible, but to me personally, it’s the artistic way the human adds something very unique to the song that will make it speak to other humans. That’s what will differentiate it. It’s about the art – if it’s a vocal, or if you export the whole project and manipulate it, what makes it unique is what they add on it. At the end of the day, it will never replace humans.”
Reassured? Not yet? A spokesperson for AI tool AIVA said, “AIVA does not intend to replace the human composers, but to enhance their creativity and save time for them. AIVA is just a tool that functions under the composer’s supervision, and the end result will always depend on the taste and the skills of the user.”
Meng Kuok is the CEO of BandLab, a “cloud platform where musicians and fans create music, collaborate, and engage with each other across the globe.” Kuok says that their focus is on empowering creators.
“With AI, we are singularly focused on developing tools to assist creators, instead of trying to replace them,” Kuok says. “For us, writing great songs isn’t just about melodies and arrangements – it’s about bringing to life stories and establishing connections between the artist and fan around the human experience, and that speaks to a greater need for a human touch than ever before.”
Kuok believes that there will be a negative impact on musicians in certain sectors, such as generic commercial sync licensing and background music. But overall, he thinks that the increase in creatives in the industry, musicians or not, will be a positive thing in the long term. Not everyone will agree.
“With more effective tools, powered by AI, to turn simple lyrics, photos or even videos into starting points to express themselves musically, we believe that more creators than ever before will be able to tell their stories, and this will bring even more creativity and talent into the music-making ecosystem, where previously they may not have been able to contribute because of a lack of music education, songwriting experience or even production equipment,” he says. “Though this may result in a per-artist compensation drop due to a vast increase in the number of artists globally, substantially increasing the number of music-makers can only grow the market overall and be a boon to the long-term stability and acceleration of the music industry into the future.”
Looking at AI tools as something to be used alongside existing instruments and technology, rather than indulging in fear-mongering, does make sense. These interviewees make some solid points. After all, the invention of DJ decks didn’t make the guitar obsolete. Even after drum machines arrived, people still preferred drums. But it is worth bearing in mind that AI is still an emerging technology, and it’s going to keep growing.
“There are a lot of companies that say they’re AI music but just tackle the content creator side of things, or are just helping with composition,” says El All. “There hasn’t been something that is end to end, that produces studio quality music at scale. That’s where you see Soundful playing a part. However, I see our machine learning and algorithms getting more and more advanced, and how we can really tailor everything based on user behavior, also incorporating on the video side of things so that music will be created along with the images in the video. There are no limits to where it’s going to be going, but in the short term, I see it as a massive tool to empower the whole ecosystem of creators.”
Hua says that there are two fronts where vocal synthesis technology is changing the music industry.
“If you look at what has happened to music tech in the past 50 years or so, there had been changes preceding the AI boom,” he says. “It’s like when virtual instrument technology matured enough, people would create the instrumentals in a DAW, then decide whether to record it using real drums and guitars or not. We’re now at the turning point when a ‘prototyping’ stage for vocals is becoming possible. I imagine in a few years, this will be part of the standard workflow.”
“The other area is in connection to ‘a humane identity behind the voice’,” he continues. “It doesn’t need to be a human – if you have heard of virtual idols (e.g. Hatsune Miku) – they have been a big thing in Asia. The idea is to create a virtual character that people can relate to and let users create vocal synthesized songs for that character, essentially a distributed form of music creation. We are yet to see this idea becoming popular in the western world.”
Frankly, the whole thing is both exhilarating and terrifying, depending on your viewpoint and level of cynicism. When discussing AI rappers such as FN Meka with TMZ, hip-hop artist Hitmaka said, “The concept of AI is just a culture vulture type of thing. That AI doesn’t give back to the culture, it just gives back to Capitol Records. We need more people inside of these buildings who are in the culture and are going to say, ‘This is not right, I don’t care if you fire me, but I’m not standing for this.’”
Perhaps he’s right. But we have a feeling that this technology is only going to become more prevalent, in music and beyond.