More specifically, I think the part that seems copied is at 2:13 of the original[2], as it leads into a solo-ish bit which in the AI version sounds similar still, but goes on to do its own thing:
The only things they have in common are vibes (in the contemporary sense, not vibraphones). Two dudes singing about sex in falsetto at 120bpm over prototypical R&B/funk elements isn't special. If that's the bar for copyright infringement then 99% of the popular music canon is illegally-derivative. Marvin Gaye was a singular talent but that doesn't mean that his heirs should be able to collect money every time somebody plays an electric piano bassline and sings about making whoopie in alto II.
1) They had a musician come in and deconstruct the songs, and she showed that many of Blurred Lines' elements copied the “rhythm” and “feel” of the sheet music, not the master recording.
2) Robin Thicke said in an interview he told Pharrell they should make something with the groove of Got To Give It Up.
To a non-musical person on a jury, those points are convincing enough evidence of intent to copy the song, even if the final song is clearly different. Though it should never have come down to a jury and judge to decide.
Even more hilarious, a couple of years ago the Gayes tried to sue Pharrell again over an interview where he mentioned that Got To Give It Up was the inspiration for Blurred Lines. Luckily that failed, but they definitely seem to have it in for Pharrell.
The closest thing I've seen is the virtual drummers in Logic Pro X, which follow along with the structure of your song and generate a percussive accompaniment. It's no substitute for a real drummer, but it's serviceable.
One of the most fun things about jamming with other musicians is that things change with the ebb and flow of the band's energy. I always miss it when I'm just fooling around at home with a looper.
I have no real clue how someone would use them for a more serious endeavor; the only thing I can imagine is quickly iterating/prototyping song structures on a fixed seed to generate ideas for a real composition. Consider an indie game developer or filmmaker generating placeholder music to test the experience during early throwaway iterations.
Another, more defensible use would be something like a music bed for a video or podcast.
A third use is for spamming streaming platforms and making money off undiscerning suckers.
There are lots of potential musicians / producers who can write a catchy tune, write lyrics, do MIDI work, etc., but maybe can't play or don't own the instruments they want to use (they could be disabled), or maybe don't have a great singing voice. These AI tools lower the bar so more people can create music at a higher level. They can also act as an improvisational partner, letting you explore more musical space faster.
As a personal anecdote of where AI might be useful: as a hobby I occasionally participate in game jams, sometimes working on music / sound effects to stretch my legs from my day job. One game jam game I worked on was inspired by a teammate's childhood in Poland, so I listened to a bunch of traditional Polish music and created a track inspired by it. I'm pretty happy with how it came out, but with current AI I'm sure I could have improved the results significantly. If I were making it now, I could upload the tracks I wrote, see how the AI might bring them closer to something that sounds authentic, and use that to help me rewrite the parts of the melody that were lacking. Then I could have piped in my final melody with its inauthentic MIDI instruments (I neither own nor play traditional Polish stringed instruments) and used the AI to make something much closer to my target, with a more organic feel.
New types of electronic instruments.
We’ve been able to use analog circuits, digital logic, and then computers to generate sounds for decades… aka synthesizers.
I would love to see synthesizers which use music generation models under the hood to create new sounds. And / or new interfaces to create different types of instruments.
There’s a lot to explore here, in the same way there was (is) lots of exploration of electronic music starting I suppose with the Theremin in the 1930s.
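At its most basic, computer sound generation of the kind synthesizers do can be sketched with nothing but the Python standard library. A minimal illustration (not any particular synth's implementation): compute the samples of a pure sine tone and write them out as a 16-bit mono WAV file.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100  # samples per second (CD quality)

def sine_samples(freq, duration, rate=SAMPLE_RATE, amp=0.5):
    """Generate 16-bit PCM samples for a pure sine tone.

    freq is in Hz, duration in seconds, amp is a fraction of full scale.
    """
    n = int(rate * duration)
    return [int(amp * 32767 * math.sin(2 * math.pi * freq * t / rate))
            for t in range(n)]

def write_wav(path, samples, rate=SAMPLE_RATE):
    """Write mono 16-bit samples to a WAV file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 2 bytes = 16-bit samples
        w.setframerate(rate)
        # Pack as little-endian signed shorts
        w.writeframes(struct.pack("<%dh" % len(samples), *samples))

# One second of A4 (440 Hz)
write_wav("tone.wav", sine_samples(440.0, 1.0))
```

Everything interesting a synthesizer does (timbre, envelopes, filters) is layered on top of this basic sample-generation loop; a generative model under the hood would simply be another way of producing the sample stream.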
I could see something like this baked into an editing tool that allows video editors to specify a tone or style of music in plain language to serve as background music.
You'd think those would be easier to achieve than something that tries to just replace me completely.
Some very desirable features are just not available as plugins (or I didn't find them), like enhancing recording quality: this is only available as paid services aimed at podcasters, which work on spoken voice only.
Again, the problem is that most of the offerings try to leverage the neural network for some complete solution, replacing steps that professionals are perfectly able to take themselves (and need to decide on). I'm constantly looking for specialized solutions that do the job that's impossible to do manually. The best example is Demucs for stem-splitting: it does one job and leaves me to work on the rest.
I kept plucking away at it until I got it to a point where it could generate sheet music and guitar tabs in the style of various artists.
It would be fun to revisit that project with fresh eyes.
Junk food is OK every once in a while, but if that's all you eat you're not going to be healthy. I believe the same is true for the media you consume.
Edit: Also, to add to your analogy, junk food made people's palates so accustomed to high amounts of sugar, fat and salt that some people "don't like water". If you've only known pizza rolls and chicken tenders, you're unlikely to enjoy a chicory salad or even a moussaka. And if all your entertainment is Call of Duty, Marvel movies and AI-generated music, you're unlikely to enjoy Weather Report, Rachmaninoff or Pynchon. I'm not saying that to be elitist, but we've got to stop dumbing down culture and entertainment.
It was boring decades ago and it's boring now.
See: synthesizers, digital synthesizers, overdriven guitars, reverb effects (instead of recording in special rooms), MIDI, multitrack recording, studio musicians, overdubbing, digital recording, Auto-Tune.
Every time anything comes out that makes music production easier people bitch about it. Never fails.
I've personally made 3 albums back in my teenage years using nothing but a microphone, a computer, instruments that I played, and friends. I know how hard it is and how much it sucks to not be good enough for others to care about your music that you put your heart and soul into.
I agree that anyone who tells a computer to make them a song and then calls it their song is full of shit.
BUT
If someone writes the lyrics to a song and wants to hear what it could sound like, that's still art.
If someone generates a backing track and then performs over it, that's still art. After all, almost every social media influencer turned musician did exactly that, buying the "beat" and then performing over it, and they still get millions of views and sometimes even perform their fancy karaoke for crowds of adoring fans.
There is a line and it is a very clear one. If a human is involved in a critical portion of the song's creation, it's art. If not, it's slop.
All of those required critical thinking and knowledge from musicians.
The issue is not that an artist will use AI to split tracks on an album to practice, or use AI to come up with grooves for their VSTs; the issue is that you have AI bands on streaming platforms hoping to make money. Small magazines, publications, etc. (see Clarkesworld) are being flooded by AI slop because people want to make a quick buck. Big players want nothing more than to be able to prompt a model and get free hits with nobody to credit or pay royalties to.
If you don't like it, don't support it.
Sure it does. It commodifies your passion.
You can still play to amuse yourself (if you're asocial enough), but now you're playing in a world that has your passion, skills, and sweat devalued and cheapened: commodified.
Because the folks that stand 10-toes-down thinking they are solving some neat problem simply don't have the context to understand their idiocy.
They ain't gonna get it. You're not going to get them to understand because they don't believe the result of artistic work is to change the artist as much as it is to produce the art.
There is an iterative process in which artists struggle to make something which, in turn, changes the artists in ways that, in turn, changes what artists want to make, which then, in turn, changes what they make.
I'm not an ML person, but as I understand them, this feedback loop is precisely -not- what an LLM can do: it's not gathering info about how it has failed, which is largely what good artists are doing.
Still, the discussion has been helpful for me, as I have discovered why I find the idea that it's somehow useful to randomly generate music to be so gross:
that "understanding" of music fundamentally mischaracterizes a complex process.
What I find so odious is that these kinds of projects are saying: "you are this- this is what you're doing".
And while that misunderstanding is not true (and thus I agree that folks should have their dumb toys), I also understand why almost anyone who has ever worked on art finds that reductive statement about their praxis offensive, and is thus correct to be offended when folks casually express it.
I made my peace with commodification of art and music a long time ago- I will charge for my dance band's performance tonight simply because I want the gas money and it selects for folks who are willing to give a little to get a little.
It's not commodification of my music, insofar as there's no point at which there is a Real "I can trade x minutes of music for N feet of cotton string", but it's close enough that I understand why the hardcore and psychotic capitalists would think they are the same thing.
In a certain sense this is why it's probably wise to push back against the LLM folks just like it's good to push back against a totalizing capitalism:
- the capitalists reduce everything to the commodity form, yet there is always an excess they can't write into their ledger
- the LLM folks reduce art to outputs, yet there is always the context which frames the work which they can never bring back into that work.
So I'm fine with "commodification" of music- one of the deep contradictions of the culture here is that even as there is no ethical consumption under capitalism, there is also always an excess that doesn't get soaked up into the money form.
However, what I still haven't sorted through is the idea that folks think music is somehow just "making noise in some kind of ordered manner", and thus that if they can find the statistically likely ways that noise is ordered then they are doing what I am doing when I make music.
I find that idea highly objectionable because it takes a process that it has taken all of my life up to any given point to become comfortable with, a process I deeply grapple with daily, and reduces it to its output.
For instance, I have been playing real book jazz tunes on clarinet with a couple of the jazz majors at a bar in my town for the last 5 or 6 Friday nights.
These kids are great- they have chops and ears and curiosity. What I also like is that they don't have a lot of baggage about their professionalism, so they don't mind if I sit in on clarinet with my not-pro chops.
That's a fun situation; my 45-year-old self gets to hang out with a bunch of kids in their 20s and learn a bunch of tunes ("All Blues" or "Four") that I never got to learn because I was playing bass in punk rock bands at their age.
And what is better, last night the bass player couldn't make it, so I called a buddy of mine who has been playing guitar in big bands since the mid 1960s to sit in with these kids. We all had an excellent time- I swapped onto piano for a while and he had to play bass for a while, and the 21-year-old guitar player got to pick with another player almost four times his age in a very equal setting.
That situation is radically complex, in that it contains so many levels of interaction, generationally, musically, contextually, and so on, that it becomes impossible to reduce it to a single axis. It's likely impossible to fully trace out why it was so satisfying to all the folks (audience and musicians) involved.
However, the singularly least satisfying way of understanding the situation there would have been to record it and say "this is what it was", and even more insane would be to say "this LLM generated thing which sounds similar is the same thing".
My point here isn't to shit on folks who are so disconnected from the reality of making music as to think that there is a need for -more- of it (randomly generated by machine): those folks are already living a hard enough existence. And there is always already so much music in the world that the use case of randomly generating more of it is dumb.
Fuck those folks for their interpolation of my activities, but I mostly just feel sorry that they are so disconnected from something I find so helpful.
Rather, my point is that the process of trying to figure out what to say and how to say it changes people, and LLM generation skips all that work.
I like a forklift as much as anybody, but thinking that dock workers are doing the same kind of thing as people in a weight lifting gym and therefore automating the weight machines is its own kind of insanity.
And, further, my point is that it took writing all that out in essay form and revising it to really understand how deeply that feeling of disgust at LLM-generated music really runs in me.