Real or AI? This is the question many newspapers have asked in recent days. The object of investigation is Sienna Rose, a smooth soul-jazz singer with over three million monthly listeners on Spotify. While articles about "her" multiplied, Rose's creators capitalized on the peak of popularity by releasing an EP entitled Date Night. It's a sort of concept record that narrates a romantic evening through seven songs with mellifluous tones and remarkably literal titles, from "Candlelight Dinner" to "Sleeping Snuggled" by way of "First Kiss", "Hotel Room" and "Without Clothes". Date Night is the perfect representation of everything that is wrong with music created with the help of AI: it imitates what already exists, trivializing it.
A few days ago Jag Vet, Du Är Inte Min, a song by Jacub, was excluded from the official Swedish chart despite being a hit on Spotify in that country. The reason given: the vocals and some instrumental parts were created with AI. Recorded music is a devalued asset and therefore particularly vulnerable to "attacks" of this kind. If something has little added value, it can be replicated; if a piece of music is already anonymous and standardized, you might as well have machines create it. Where the possibility of distinguishing the real from the artificial with absolute certainty ends, taste should come into play, but songs like these are aimed at an audience less and less accustomed to valuing the peculiarities of a musical performance. In a study of 9,000 subjects aged 18 to 65 in eight countries, conducted by Ipsos for the French platform Deezer, 97% of participants were unable to distinguish "real" music from music created with artificial intelligence.
Thinking "I'm not part of that 97%, I can tell real music from fake" does not protect us from the consequences that the use of AI will have on the world of music. Those with a trained ear immediately grasp the truth present in the Grateful Dead's records and absent from those of Velvet Sundown, but at least in this historical phase what also counts is the consumption of those who listen passively, via playlists compiled by algorithms and editors. This is especially true of so-called functional tracks, created to accompany activities or moods: music that is "used" rather than listened to for its own qualities. That is where music generated by and with machines is spreading.
According to research by Deezer, 28% of the music uploaded to the platform daily is already created with artificial intelligence: about 30,000 tracks a day. These are tracks that are almost entirely lost on the platforms, listened to by no one, but every now and then a song emerges from the pile and becomes a preview of the future that awaits us. The growth of the phenomenon is notable: at the beginning of 2025, songs created with AI were 10% of the total. At this rate, they could take away part of the earnings of small and medium-sized authors and musicians and erode the quality of the music produced, which would become even more standardized than existing music.
What can be done to stem the invasion of these digital body snatchers, songs created partly or entirely by AI systems? Last week Bandcamp, a music service with a worldwide reach and a reputation for being close to the needs of independent artists, provided its answer. In order to safeguard creativity, allow musicians to keep making music, and let listeners be confident that what they hear was created by humans, the platform released two new guidelines on generative AI: 1) "Music and audio generated wholly or largely by AI is not permitted" and 2) "Any use of AI tools to impersonate other artists or styles is strictly prohibited."
Some bet that cases like those of Breaking Rust, Sienna Rose, Xania Monet and Velvet Sundown will become ever more numerous and relevant. Bandcamp's initiative was therefore welcomed as a step beyond what Deezer has done, which is to label tracks created with AI so that users are at least aware of what they are listening to. It is an approach that only partially convinces Holly Herndon, who holds a PhD from Stanford's Center for Computer Research in Music and Acoustics and is an artist at the forefront of the use of artificial intelligence.
Herndon is an optimist in a world of prophets of machine-driven catastrophe. At the center of her research are the ethical use of data, meaning obtaining the consent of those who originated it, and the idea that artists should be the ones training AI, appropriating the technologies that define the present. In a world of infinite media where anyone can become a creator, artists are called upon to redefine their identity. Time included her in its 2023 list of the 100 most influential people in AI. In short, she is not a popular artist, but she is a relevant one. She says the measure adopted by Bandcamp is right, but adds that a Manichean position on the use of AI can be counterproductive and penalizing. The pervasive presence of AI in the creation of music cannot be stopped by such bans, especially because, being generic, they cannot distinguish between creative and "lazy" uses of the new tools.
According to Herndon, the human-AI dichotomy no longer has any reason to exist, and in the long run it will become a mere question of appearance. What seems human is not necessarily human, and is not necessarily better than what is partly artificial. Faced with technological mechanisms that are unfathomable because they are extremely complex and hidden, we risk making clear-cut but superficial choices. "I have more ownership of my AI models than most pop stars have of their songs," Herndon writes on X, and it's hard to blame her. It is therefore harmful and ahistorical to condemn the use of AI in itself: it is a technology that is changing the way we look at the world. Demonizing it is useless; it would be more useful to go deeper into the topic and learn to distinguish those who use it inventively.
The idea behind Bandcamp's stance, namely banning music generated wholly or largely by AI, "is understandable in the case of bots that publish 1,000 generic songs a day. That, however, is a spam problem. And I feel compelled to oppose banning human artists from experimenting with an era-defining medium."
Herndon gives an example: she writes a song and, wanting to add a new layer of production, feeds it to a model she trained herself, using not the work of others but her own. Under scrutiny like Bandcamp's, a musical creation of this type will look as artificial as a quick-and-dirty song made with Suno. It will look even less human than a pop song generated entirely by Suno from a simple prompt and then re-recorded by a real singer and band.
"A lazy hypothesis about AI is that lazy people use it, while more motivated people stick to traditional tools. It's nonsense: why shouldn't human beings hungry for new sounds use models that let them dig endlessly down the rabbit hole that is the creation of sound?"
The uncertainty of this era, the aversion to mediocre music created with AI, the feeling of being cheated by those who do not declare how much of what we listen to was generated by a machine: none of this should lead us to reactionary judgments on the legitimacy of those who create music with new technologies. Is it possible that a track made by a producer using a beat from a pre-packaged sample pack is considered more human than a song made using AI as a creative tool?
