AI and music in our future…

is it too perfect? Too rigid? Too sterile?
Hell yes!!! There’s never a single glitch! I believe the rigidity comes from the perfection. The sterility can be overcome to some degree with reverb and such, but as we both know, it will always lack that human "imperfection".
Sorry for blathering on and on. I love this stuff and get carried away. I’ll shut it now. Thanks!
Nothing in your posts can be considered blathering, nor would I consider it that! Even though I rely on the technology to achieve my performances, everything I’ve read gives me pause to ponder when I’m working. The listener might find the music, ummm… unpleasant lol, but at least I’m aware of the pitfalls of too perfect, too rigid, too sterile. Thanks for that!
 
@987baz Well said! I still firmly believe that if someone is going to compose/write music, they should at least create the chord progression and melodies themselves instead of relying on a software program to do so. A music theory book is a lot less costly, and a lot longer lasting, than a software program.

I agree, but I do use a random generator for ideas sometimes, to spark things off. It's not AI, just a bunch of MIDI files of royalty-free music that you can play with in bite-sized chunks; then I change the key, time signature, etc. So I guess in a way that's cheating? But the way I look at it, music comes through me from source/DCM/higher self, whatever, so could it not also be inspiring us through technology? Just a thought (haha, I do like to play devil's advocate, don't I). Everything I do art-wise, music, photography, video, etc., is guided by my intuition, which in turn, I believe, is being guided by the above mentioned. I try to let go of preconceptions, anticipate anything, and just let it run through me. I have to admit I find that much stronger when I am playing my guitar than sitting at the DAW with a keyboard, so take that FWIW.
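For anyone curious what that kind of re-working can look like in practice, here's a minimal Python sketch of the "change the key" step, using the mido library. The file names and the random ±5 semitone range are just my assumptions, not how any particular generator actually does it:

```python
import random
import mido

def transpose_chunk(in_path, out_path, semitones=None):
    """Transpose every note in a MIDI chunk by a random (or given) interval."""
    if semitones is None:
        semitones = random.randint(-5, 6)  # pick a new key at random (assumption)
    src = mido.MidiFile(in_path)
    dst = mido.MidiFile(ticks_per_beat=src.ticks_per_beat)
    for track in src.tracks:
        new_track = mido.MidiTrack()
        for msg in track:
            if msg.type in ('note_on', 'note_off'):
                # shift the pitch, clamped to the valid MIDI note range
                msg = msg.copy(note=max(0, min(127, msg.note + semitones)))
            new_track.append(msg)
        dst.tracks.append(new_track)
    dst.save(out_path)
    return semitones

# Hypothetical usage: spark an idea from a royalty-free chunk
shift = transpose_chunk('chunk_in_C.mid', 'idea_transposed.mid')
print(f'transposed by {shift} semitones')
```

Changing the time signature or chopping the chunk up further would work the same way: read the file, rewrite the relevant messages, save a new version to play with.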


Food for thought… Two things I learned while working on my Master's: there is no such thing as a perfect performance, just great performers who know how to hide those mistakes. And other than live recordings, what is purchased on a recording is a manipulated rendition, created from multiple takes, sliced up and put back together to create a "perfect" recording.

Exactly right!! As a mixing and mastering guy, that's what we do: we make the song the "best" we can. Now, I agree that perfectly aligned and auto-tuned vocals aren't my thing, but it is expected these days. I like the little mishaps and inconsistencies, and I tend to do live takes of vocals because they have more emotion and character rather than being perfectly in pitch, etc.


Yep. I used to think I really sucked and was too imperfect as a performer. Not easy to wobble through a set without tension, stumbles, and brain farts here and there. One great thing about YouBoob is seeing some of my past pro heroes making mistakes, botching riffs, or playing simpler versions that are easier to nail in live gigs. Good for the self-esteem to realize live performances had their flaws and weird stage dynamics and moods I never noticed when I would attend a live show (same as I experienced). It also makes me appreciate the people who can nail it in real time. I recorded with our band in a studio back in the day. No cut and paste. No retaking this or that part. Each person had to get their part right, all the way through, all at the same time. Intense pressure! (It took 5 or 6 takes per song to get a “perfect” one.) So how does this relate to the thread, aside from fond memories?

Obviously AI can make songs with no mistakes and perfect meter. I wonder if that level of perfection would leave a discerning human listener feeling flat, uninspired, and bored? There are subtle meter fluctuations when real humans are involved. A conductor can even induce this lingering or urgency (and so can the drummer! Lololol), or any player actually. In reality humans will play ahead of or behind the beat just a scootch, and that translates into feeling a groove, as the parlance goes. The brain hears all of this. I know there are quantization schemes to emulate it, but IDK… is it too perfect? Too rigid? Too sterile? I suppose only the music nerds will think they can hear the difference, but most people will be hypnotized like always, irrespective of the nuances.
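Just to make the "humanize" idea concrete, here's a rough sketch of what those schemes typically do: take a perfectly quantized part and nudge each note slightly ahead of or behind the beat. The tick resolution and jitter amounts are made-up numbers, not anything a specific DAW uses:

```python
import random

PPQ = 480  # ticks per quarter note (a common MIDI resolution, assumed here)

def humanize(note_times, max_jitter_ticks=12, push=0):
    """Nudge each quantized note time by a small random offset.

    push > 0 drags notes behind the beat (laid back),
    push < 0 plays them ahead of it (urgent); both stay subtle.
    """
    out = []
    for t in note_times:
        jitter = random.randint(-max_jitter_ticks, max_jitter_ticks)
        out.append(max(0, t + jitter + push))
    return out

# A perfectly quantized bar of eighth notes...
grid = [i * (PPQ // 2) for i in range(8)]
# ...loosened up just a scootch
print(humanize(grid, max_jitter_ticks=10, push=5))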

Another thing regarding AI being too perfect and predictable and consequently…boring: I recall a Rick Beato vid where he analyzed how long songs were listened to before the listener flipped to the next song. It was an eye opener.

AI "music" will more than likely just sound like top 40 chart stuff, extremely over produced and "perfect" as you say, but most of that stuff is already. As I said in another thread, the way I look at my music is, that like us, it's imperfectly perfect, which took me a while to get, because I did suffer from perfectionism for quite a while, which is pretty debilitating.

I think the "problem" lies with the fact that we have been doing this quantizing/autotuning stuff for so long that when people hear things that aren't it sounds different, and, that can be good or bad depending on your tastes and how you listen to things. But I would say for the most part, the end listener doesn't care at all, they just like the song or they don't, it resonates or it doesn't.

Create what you feel, feel what you create, can't really go wrong there!
 
Create what you feel, feel what you create, can't really go wrong there!
I was told by one of my composition teachers: "Do what you love, love what you do."
I agree, but I do use a random generator for ideas sometimes, to spark things off, it's not AI, just a bunch of midi files, that use royalty free music, that you can play with in bite sized chunks
@987baz FWIW… well worth investigating is The Schillinger System of Musical Composition. The author was proposing electronic and computer-based music back in the '40s.
 
I'm not sure how succesful AI will eventually be in creating music, as songs that resonate with people have something more than just the building blocks that can be described as melody, rhythm, harmony and timbre. These elements have been analyzed in and out already, and the music that people enjoy isn't too complicated, yet no one has invented the ultimate formula to create hit songs, so using AI in this manner could be just a waste of money for record companies in the end.

Most pop songs these days are quite simple and based on a few cliché chord changes that are reused song after song. I don't mean to say that using only a few chords is in itself "bad" or "good"; that's just part of the harmonic structure they often have. The real problem and root cause is the sheer lack of imagination and creativity. So if you replace this with AI, does anyone even notice, or does it even matter, if robots make these uncreative songs with the musical "nutritional value" of fast food?

I think AI will, in the best cases, serve as a useful tool that speeds up the workflow and leaves more room for creative work. Nowadays there are mixing plugins that use AI to do the initial rough mix, putting the levels, EQ, compression, etc. in place for further processing. There are AI-based EQ tools that will highlight possible masking issues between the frequencies of different instruments. This is practical and makes sense. Perhaps composers and producers could use AI to come up with tailored layers or textures for the song they are working on, instead of scrolling through and testing different samples, to get the desired results faster.
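As a toy illustration of the masking idea (this is my own simplification, not how any actual plugin works; the band edges and threshold are guesses), one could compare the average spectra of two stems and flag the bands where both are loud at the same time:

```python
import numpy as np

def band_energies(signal, sr, bands):
    """Average spectral energy of a mono signal in each (low, high) Hz band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return [spectrum[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands]

def masking_candidates(stem_a, stem_b, sr, bands, threshold_db=-30.0):
    """Flag bands where both stems carry significant energy (possible masking)."""
    ea, eb = band_energies(stem_a, sr, bands), band_energies(stem_b, sr, bands)
    ref = max(max(ea), max(eb))
    flagged = []
    for (lo, hi), a, b in zip(bands, ea, eb):
        a_db = 10 * np.log10(a / ref + 1e-12)
        b_db = 10 * np.log10(b / ref + 1e-12)
        if a_db > threshold_db and b_db > threshold_db:
            flagged.append((lo, hi))
    return flagged

# Hypothetical usage with two mono stems (e.g. kick vs. bass) at 44.1 kHz
sr = 44100
t = np.arange(sr) / sr
kick = np.sin(2 * np.pi * 60 * t)   # stand-in test signals
bass = np.sin(2 * np.pi * 80 * t)
bands = [(20, 120), (120, 400), (400, 2000), (2000, 8000)]
print(masking_candidates(kick, bass, sr, bands))  # expect the low band flagged
```

The "AI" part of a commercial tool is presumably much smarter about what counts as a problem, but the basic question it answers is the same: where are two instruments fighting for the same frequencies?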

Nowadays it's possible to make music from your own home studio and get affordable digital versions of the hugely expensive analogue mixing equipment and instruments that most people could only have dreamed of before. This has opened the door for many musicians to explore and create music on their own terms more easily, I believe. It is of course a different matter whether this music makes it all the way to the general public, and this is where record companies and the media have a great deal of influence, as always.

But in a way, I think that genuine music and talent always have a way of finding some audience, regardless of the generally dumbed-down version of music that is being pushed on the masses (there are always some genuinely good songs in the charts too). That said, this lack of creativity in art and entertainment (and everything else) is a sign of the times in the West today, along with the political propaganda and ideological brainwashing that goes with it, which one has to keep in mind all the time.

But I think autotune (i.e. pitch correction) is a mixing tool similar to, say, compression, which can be used when needed to shape the audio so that the music is better conveyed to the listener. Poor use of autotune is perhaps easier to notice when it's applied to the human voice. But, in the same way, compression can be overused, resulting in squashed dynamics. Used sensibly, subtle pitch correction can improve an otherwise great vocal take that has a few unnecessarily sharp or flat notes, so it doesn't have to be re-recorded. But for some reason autotune is often used to pitch-correct everything "to the grid," in the same way people overcompress dynamics or unnecessarily quantize timing.
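To make the "subtle vs. to-the-grid" distinction concrete, here's a tiny sketch of the arithmetic involved (my own simplification, not any actual autotune algorithm): find the nearest equal-tempered semitone and pull the detected pitch toward it by a strength factor, where 1.0 is a hard snap and something like 0.3 is gentle correction:

```python
import math

A4 = 440.0  # reference pitch in Hz (assumed tuning standard)

def correct_pitch(freq_hz, strength=0.3):
    """Pull a detected pitch toward the nearest equal-tempered semitone.

    strength = 1.0 snaps hard to the grid; small values nudge gently.
    """
    semis = 12 * math.log2(freq_hz / A4)       # position in semitones vs. A4
    target = round(semis)                      # nearest note on the grid
    corrected = semis + strength * (target - semis)
    return A4 * 2 ** (corrected / 12)

# A note sung roughly 30 cents sharp of A4
print(correct_pitch(447.7, strength=0.3))  # gentle: still a touch sharp
print(correct_pitch(447.7, strength=1.0))  # hard snap: exactly 440 Hz
```

The compression analogy holds up here: strength (and how fast the correction reacts) is the ratio/attack of pitch correction, and cranking it all the way is what produces that robotic, to-the-grid sound.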

It also depends on the style of music how much these tools should be used, although some genres use such heavily autotuned vocals that it makes them just unlistenable for me.

But in general it's the big picture and the substance of the music that counts in the end. Some songs and their vocals are still great even though the pitch correction (or other processing) could have been done differently and more gently (just as many '80s guitar riffs and solos sound great even though they may have overused a chorus pedal).

Just as when watching a movie, it is not so important how many takes were used in a scene or what techniques or processing decisions were made in production; instead, the quality and depth of the storyline, dialogue, acting, atmosphere, etc. are what the viewer is looking for and what counts in the end.
 
Although I resisted this for years, what ultimately matters most is the vocal, lyrics, and melody. All my “cool” guitar tricks (or AI-generated music) don’t matter if they don’t support those three. And that is where AI should fall short: connecting with the inner emotions and subconscious landscape of the listener.

Could AI conjure up a catchy Taylor Swift hit? I think so, but a human still has to interpret and sing it and have input into the lyrics. Or so I would hope.
 
I think, just musically speaking, Western music has been living through progressively worsening dark ages for years now, at least when it comes to mainstream and semi-mainstream music. It becomes worse year after year. I see the AI stuff as a continuation of that negative trend. I think Rick Beato has done a pretty good job of highlighting exactly that in his YouTube videos for years.

It is hard to overstate how bad music has become. It is just god-awful, no matter how you look at it. And it isn't just the so-called "music" itself that has become shockingly bad; the lyrics have too:


My favorite music guy on YouTube sees it pretty much exactly like that as well. I couldn't agree more: "Today’s Lyrics Are Pathetically Bad":

 
Here's another video from Rick Beato (this guy is awesome) that came out yesterday. I enjoyed this a lot. AI and what studios are doing with it is incredible. It's the death, basically, of creativity, but it generates money, money, money for the right people. I do not want to sing a song, run it through a plugin, and sound like Taylor Swift or Drake on the other end... maybe Tom Waits...


I guess I'm dating myself now when I say I still like music played the 'old-fashioned' way by musicians who know what they're doing. Just an example:

 
I guess I'm dating myself now when I say I still like music played the 'old-fashioned' way by musicians who know what they're doing. Just an example:
Totally! However, ironically, this Yes vid is completely canned, MTV style: the studio version lip-synced. Nobody is live except the drummer, and his part is prerecorded anyway - see any cables or amps? On the beach in front of a private jet? The keyboard player with the accordion is showing his sense of humor. Switching out guitars left and right? (Pretty sure this is before the era of reliable wireless tech, but no, this is not live music.) Same for the risk of electrocution in the wet sand! Who does that? (Err, I have played live standing on grass (no stage, party gig) that got damp in the late afternoon. I can tell you first-hand that touching a live mic with your lips and no windscreen while standing on the ground is def a bad idea! LOL, feels like getting stabbed in the face with a pen knife! ... me: young, ignorant, and stupid at the time🤡)

I think what happens is we SO want to hear the perfect version, that's what we upload - a perfect canned version - and normally live is never that. BUT, live has a different type of energy transference/exchange that happens that is way cool and unobtainable through the studio versions, so, yeah - I agree 100%.
 
Totally! However, ...

Sure, this is not live, and I understand the energy behind live performances. The point I was trying to make (which I think you understand, but I'll say it anyway) was that there's no AI in the Yes video; substitute Woodstock if you wish. It's real people. At the rate AI is being implemented, we might see a new music video of John Lennon singing death metal in Japanyiddish on Mount Everest. MTV-style videos are now historical evidence that real people made music, even if it's 'canned'. Unfortunately, by the sounds of it, everything in these recordings could become the new 'crate digging' for AI.
 