Unless you’ve been vacationing in your cave in the most inaccessible-to-technology spot on earth for the past generation, you’ve heard the term A.I., which is short for artificial intelligence. I’ve read a number of articles predicting that A.I. capabilities will easily surpass those of a human mind. But can an A.I. dance?
Many music lovers have already experienced primitive types of music A.I. in the form of an “advanced shuffle” feature. Sony has made its SenseMe™ intelligent shuffle feature available on quite a few of its digital playback devices, such as the NW-WM1Z portable player and the HAP-Z1ES media player. Although the Sony algorithm uses only a few different “characteristics” in its selection process, it is a primitive form of A.I. According to Wikipedia, SenseMe™ works “by mapping music to a dual axis map based on the mood and tempo of music tracks. Mood and tempo are determined by using the appropriate Sony compatible software, which analyzes music tracks individually and computes the relevant track information. Analyzed tracks can then be plotted onto an intuitive dual axis map…The horizontal axis is based on mood and the vertical axis is based on tempo.” SenseMe™ has twelve categories of music – morning, daytime, evening, midnight, energetic, relax, upbeat, mellow, lounge, emotional, dance, and extreme. And SenseMe™ was first introduced by Sony in 2009!
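To make the dual-axis idea concrete, here is a minimal Python sketch of how tracks might be plotted on a mood/tempo map and sorted into listening channels. This is purely illustrative: the numeric scales, the threshold values, and the way channels are carved out of the map are my assumptions, not Sony’s actual SenseMe™ algorithm (which analyzes the audio itself and uses twelve categories).

```python
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    mood: float   # horizontal axis: -1.0 (dark/sad) .. +1.0 (bright/happy); assumed scale
    tempo: float  # vertical axis: 0.0 (slow) .. 1.0 (fast); assumed scale

def channel(track: Track) -> str:
    """Assign a track to a channel by its quadrant on the mood/tempo map.

    Only four of SenseMe's twelve category names are used here, and the
    cutoff values are invented for illustration.
    """
    if track.tempo >= 0.5:
        return "energetic" if track.mood >= 0.0 else "extreme"
    return "relax" if track.mood >= 0.0 else "mellow"

def playlist(tracks: list[Track], wanted: str) -> list[str]:
    """Return the titles of all tracks that land in the requested channel."""
    return [t.title for t in tracks if channel(t) == wanted]

library = [
    Track("Sunrise", mood=0.7, tempo=0.8),     # bright and fast -> energetic
    Track("Night Drive", mood=-0.4, tempo=0.3),  # dark and slow -> mellow
    Track("Slow Burn", mood=0.2, tempo=0.2),     # bright and slow -> relax
]
print(playlist(library, "energetic"))  # ['Sunrise']
```

In a real system the mood and tempo coordinates would come from audio analysis rather than hand-entered numbers, but the selection step afterward really can be this simple.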
Obviously, SenseMe™ is an early version of a music A.I. What if it could be expanded to not only analyze the musical characteristics, but also the lyrical content and emotional intent of the music?
With the advent of Siri and Alexa, humans can request data and information directly from a device via voice commands. How much of a stretch would it be to say “Alexa, play music that makes me happy” instead of “Alexa, play NRBQ”? It could, conceivably, get to the point where the user doesn’t even know the names of the music’s creators, because it will all be selected by the A.I.
And what about the other side of the music paradigm – creation? We’ve had “computer music” ever since Wendy Carlos first produced her earliest beeps and blips, but what about music created entirely by an A.I.? I’ve read that many of today’s contemporary pop hits are created by teams of writers, sometimes as many as six on a single song. Is it really a stretch to convert the computing powers of “Big Blue” or a cloud-based A.I. system to the relatively trivial task of creating music? Is it even a stretch to consider that we could eventually have an environmental system that analyzes a human’s physical and emotional state when they enter a room and begins to play music appropriate to that state (or even compensates for undesirable moods by introducing music to counter them)?
By now, the humanists (and any musicians) who are still reading this article could be getting a bit perturbed by the idea of a computer replacing humans in the creation of musical art. I know it doesn’t fill me with warm fuzzy feelings. And if you believe that music is primarily a carrier for emotional content, you could be wondering how a machine that has no emotions could possibly create music that does contain emotional meaning.
Could A.I. music even get close enough for rock and roll?