Manufacturing “Hits”: A Data-Driven AI
Approach to Releasing a Pop Song in 2022
Jessica Birk, Rutgers Business School*
Dr. Madhavi Chakrabarty, Rutgers Business School
Abstract - Technology is radically transforming the music industry through the use of big data,
artificial intelligence and machine learning algorithms. This paper presents a pilot study that
examines the impact of data-driven approaches in creating, predicting, and marketing music. A
machine learning algorithm is used to determine the optimal characteristics of the most popular
songs and is used as the basis for creating the next song. Next, an AI technique is used to
generate the inspiration of the instrumentation and lyrics of the song. Finally, a listener survey is
used to determine the trend which includes the mood, preference, and context for the song. The
song is then released on Spotify. The success of the song is determined by comparing the number
of streams to two songs released the previous year without the use of data. The results show that
creating and marketing a song using AI models, Spotify listener data analysis, and listener
questionnaires has a strong positive impact on a song's streaming success. While this is a pilot study,
its findings can be applied more extensively to gauge the success of music videos and
releases and to serve as guidance to artists seeking more quantitative insights.
Keywords - Music marketing, AI Techniques, RNN Model, Characteristics of a Song
Relevance to Marketing Educators, Researchers and/or Practitioners - A new song inspired by
AI and created by a young artist is released on Spotify. Will the music perform better or worse than
those created purely by inspiration? A pilot study investigates the role of AI in music creation,
production, and release.
Introduction
The Music Industry is a $61.82 billion industry as of 2021 (Götting, 2021). Yet analysts at
Goldman Sachs predict that it will more than double in size by 2030, to an astounding $131 billion
(Goldman Sachs Report on Music in the Air. Insights 2016). This growth can be attributed
to many factors: increased demand for music, more robust digital streaming services, technological
advancements, and a more democratized environment that allows independent artists to enter the
market as well. Thus, this rapid growth offers a unique opportunity to enter the industry and make
significant money. Data and Artificial Intelligence (hereafter AI) can be the tool to leverage to do
so.
Prior to 2000, songs were largely made and marketed based on the ‘gut instinct’ of the
producer. The process was very subjective and less than 0.9 percent of musicians became
mainstream artists (Menyes, 2014). Additionally, music industry executives followed a traditional
business model that overlooked the musicians. Since music was in physical vinyl or CD form,
musicians were “not able to collect a decent rate from sold copies, because record labels would
take most of the profit” from physical sales (Hujran, Alikaj, Durrani, & Al-Dmour, 2020). One of
the first disruptors in changing the way music was marketed, shared, and played was Apple. In
January 2001, they launched iTunes and the first iPod soon thereafter ("A&E Television Networks.
Apple launches iTunes, revolutionizing how people consume music. ," 2019). iTunes was
revolutionary in that it was the first online music marketplace that allowed customers to buy songs
digitally on one consolidated platform. Within eleven years, “75% of all music-related transactions
were digital sales” (Liao, 2019). This new ‘pay per song download’ business model disrupted
traditional music distribution models of CDs, vinyl, and other physical disks. The digital music
format further paved the way for streaming services like Spotify and Apple Music. As of 2021,
99% of Generation Z and 98% of Millennials actively use streaming services (Smith, 2022) and
the trend is on a rise in both the younger and older demographics. Thus, the industry has seen
seismic shifts in how music is consumed, from CDs and vinyl to iTunes to Streaming Services.
The inclusion of big data and machine learning in music is opening up the next chapter of
disruption in the music space. Use of analytics and insights based on quantitative data has changed
the way for measuring success and reach of music. Many streaming platforms already use data
analytics and algorithms to recommend new songs to listeners. Spotify grew its active user base
by more than 29% in 2020 alone (Stone, 2020). Additionally, social media platforms like Tik Tok
and Instagram use algorithms to curate their content streams and recommend new profiles for users
to follow. Users can see data analytics displayed on a page in their profile, with engagement metrics
such as video views, profile views, likes, comments, and shares, as well as follower count and
number of posts created in the last week. Several AI (Artificial Intelligence) algorithms have
proven successful in creating a recommendation and collecting listener feedback.
With the success of AI technologies in the recommendation and review space, AI is now being
tested in content creation. While this is arguably the most “creative” step in the process,
songwriting follows clear patterns and musical conventions that can be learned. It is important to
note that while Big Data and AI algorithms are being used in the industry for more targeted
marketing and streaming predictions, they have still not replaced the traditional techniques. There
is still a level of distrust in products recommended with no human intervention. Thus, the intent of
this study is to focus on the question - How effective is it to use data-focused quantitative and
qualitative constructs to create and market songs? This study is a pilot study that will focus on AI
techniques that use the salient music characteristics, the physiology of the human brain along with
the psychology of the listeners to determine if AI techniques have a future in song creation and
marketing.
Literature Review
The constructs in this study are based on understanding conventions and emotions of music and
its interaction with the brain, previous data driven AI approaches in the music industry and current
trends in the marketing practices in the industry. Each of these areas is detailed in this literature
review section.
Music and the Brain: Understanding Conventions and Emotions
To understand the science behind music, music can be thought of as vibrations that our ears can
perceive, that engage the auditory brain, and that elicit an emotional response (Zatorre, 2018). The brain
gravitates towards certain songs and away from others, due to the inherent musical preferences we
have. A study led by researchers from Cambridge University found that music listeners can be
broadly grouped into three categories depending on how their brain processes music in the limbic
region (part of the brain involved in our behavioral and emotional responses): Type E
(Empathizers), who “focus on people’s thoughts and emotions,” Type S (Systemizers), who “focus
on rules and systems,” and Type B, who focus on the two areas equally (Wassenberg, 2019). Type
E listeners prefer slow, sad songs with well-written lyrics, while Type S prefer more up-beat,
intense, and repetitive songs. Type B listeners tend to listen to both (Wassenberg, 2019). This
phenomenon for musical preferences can also be seen in brain images as suggested by a brain
imaging study that showed that most musical processing happens in the Broca’s area of the brain,
the same area where the brain processes language. People's brains are fine-tuned for “recognizing
musical syntax, just as they are for verbal grammar” (Holden, 2001). This means that people's
brains are as sensitive to musical conventions, and as quick to judge them, as they are to normal speech
conventions.
A further study showed that this musicality seems to appear in humans innately, as an
intimate form of communication and bonding. Studies show that babies as young as six months
can follow musical patterns and recognize when small shifts or deviations occur in the music
(Manning-Schaffel, 2019). Lullabies and songs that teach, such as the ABCs or ‘Head, Shoulders,
Knees and Toes,’ are built into early childhood education. We rely on music as a way to teach,
motivate, and relax children. As children grow into adulthood, they switch to different genres of
music such as hip hop, country, or pop. But their joy from music remains. Researchers believe that
this joy comes from our interaction with two distinct systems: first, the system that “allows us to
analyze sound patterns and make predictions about them” and second, the system that “evaluates
the outcomes of these predictions and generates positive (or negative) emotions depending on
whether the expectation was met, not met, or exceeded” (Zatorre, 2018). It is this anticipating and
setting of a musical expectation, followed by the breaking or following of this expectation, that
allow us to elicit an emotional response and ‘feel’ the music.
The oldest known instrument is a vulture wing bone flute, found in Germany’s ‘Hohle Fels’
cave (Stanborough, 2020). It dates back more than 40,000 years. Researchers and evolutionary
scientists think that music has always played an important role in creating feelings of social
connectedness and mood regulation (Stanborough, 2020). Today, listeners listen to music for the
same reasons and more. They continue to seek out songs to “regulate arousal and mood, to achieve
self-awareness, and as an expression of social relatedness” (Schäfer, Sedlmeier, Städtler, & Huron,
2013). According to research published in Frontiers in Psychology, the functions of music fall into
four specific categories: emotional functions, such as inducing happiness; cognitive functions,
such as escapism; arousal-related functions, such as relaxing; and social functions, such as self-
expression and connectedness (Schäfer et al., 2013).
Music works as an emotional regulator by releasing dopamine and oxytocin, while
simultaneously lowering cortisol, which is the stress hormone in the human body (Stanborough,
2020). A similar study found that listening to songs after a stressful life event will allow a person’s
nervous system to recover significantly faster. Thus, music can help us deal with difficult times on
a neurological level. But music does not only help us in terms of stress and anxiety. A further study
showed that jazz music can help lessen mental illness symptoms, specifically depression, by
engaging our reward systems and deactivating the stress systems (Chanda & Levitin, 2013). Thus,
music has a strong influence over our emotions. It’s important to note that Americans spend more
money on yearly music consumption than they do on prescription drugs (Rentfrow, Goldberg, &
Levitin, 2011). One can argue that millions of Americans are prioritizing music as their form of
therapy over medications. This makes music an incredibly powerful, yet understated, form of mood
and emotion regulation. Secondly, music works as a cognitive function by stimulating dream-like
brain activity and improving memory. In one study done on music therapy as a potential
intervention tactic for people suffering from Alzheimer's Disease, the results showed that music
could act to slow cognitive decline. While it had only minimal effects on patients suffering from
severe dementia, people with mild to moderate dementia reported a significant improvement in the
recall of many episodes from their lives (Fang, Ye, Huangfu, & Calimag, 2017). Similarly,
data from a second similar experiment showed that texts that were sung with a melody and rhythm
were significantly better remembered than the texts that were only spoken (Fang et al., 2017). Tens
of other studies demonstrate these same findings, reinforcing the power of using music to improve
memory. Thirdly, music works in arousal-related functions to calm and relax people. A research
paper published in 2015 by Shanghai University looked into the efficacy of relaxing music on
reducing fatigue and increasing muscle endurance when employees were busy with a mundane,
repetitive task (Guo, Ren, Wang, & Zhu, 2015). They found that relaxing music “alleviated the
mental fatigue associated with performing an enduring cognitive-motor task” (Guo et al., 2015).
Thus, it can be deduced that listening to relaxing background music has a positive effect on
reducing fatigue. Lastly, music works as a social function by emphasizing unitedness and self-
expression. In one study, participants responded to certain phrases. Popular ones included “Music
helps me think about myself” and “Music adds meaning to my life” (Schäfer et al., 2013). Music
forms our social identity and strengthens our connections with others, particularly when they are
fans of a certain artist that we love as well. Interestingly, responses here indicated that to most
people, listening to music is a very private exchange between the listener and the song. This is
important to keep in mind when performing research on the topic, as participants may be less open
or likely to divulge information they deem as highly personal. Finally, a further study used self-
reported mean answers to rank the self-awareness, social relatedness, and arousal/mood regulation
functions of music for participants. They found that most people listen primarily for arousal and
mood regulation, followed closely by self-awareness. Social relatedness scored lower, but it was
still seen as a critical component of the listening experience (Schäfer
et al., 2013). Thus, music functions in many beneficial mental health-focused ways in our society.
Another topic to cover in cognitive neuroscience is the importance of lyrics in influencing
human behavior and generating emotion. Lyrics can be defined as “the words of a song”
("Merriam-Webster: Lyric Definition & meaning,"). Lyrics encompass anything that was sung or
rapped or spoken in a song. Songs broadly fall into three emotional categories, as defined by
Zentner et al.: vitality (power), unease (tension), and sublimity (wonder or nostalgia) (Zentner,
Grandjean, & Scherer, 2008). While lyrics were key to defining unease and sadness in songs,
instrumentation was much more important in defining vitality and sublimity in songs
(Williamson). This means that strong, well-written lyrics matter far more in invoking feelings of
tension and despair in sad songs than in happier songs; in the former case they also activate the
area of the brain responsible for “music chills” (Williamson). In happy music, by contrast, lyrics
mattered less because the limbic region of the brain, which responds more to the instrumentation
and chordal information, was triggered more strongly (Williamson). Additionally, while clear
research supports that lyrics “have the ability to foster meaningful relationships and bring about
positive social change,” they are increasingly becoming more negative and violent (Jones, 2018).
Modern lyrics use fewer instances of collectives such as ‘us’ and ‘ours’ and more instances of
negative, violent words such as ‘kill’ and ‘hate’ (Jones, 2018). Popular topics include misogyny,
drugs, sex, and money, yet listeners tend to ‘sweep the true meaning under the rug.’ However,
recent studies show that listening to misogynistic lyrics changes men’s behavior. One study by
Fischer and Greitemeyer studied two groups of men. One group was exposed to music with
misogynistic lyrics, and a second group was exposed to neutral lyrics (Jones, 2018). Then the men
were asked to squirt hot sauce onto sandwiches for women and men. The men who listened to the
misogynistic lyrics squirted more hot sauce onto the sandwiches for women than men (increasing
the spice level and thus the discomfort), whereas the men who listened to neutral lyrics were more
inclined to squirt an equal amount for both genders (Jones, 2018). This study demonstrates that
listening to certain types of lyrics can drastically alter behavior, even if the person is not
consciously aware of the change. Thus, although they are frequently overlooked, lyrics have a
fundamental effect on emotion and human behavior.
Previous Data-driven AI Approaches in the Music Industry
Over the years, there have been several approaches to apply science to music in the form of
algorithms, models, and software. The current approaches fall into three distinct categories: 1) to
create music, 2) to market music, and 3) to analyze and predict a song’s success.
Creating Music
Alan Turing was the first to build a simple machine in 1951 that “generated three simple melodies”
(Chow, 2020). While Turing was arguably most famous for his Turing machine, he was also a great
innovator in the Music AI space. His music generating machine was built in the Computing
Machine Laboratory in Manchester, England. It was so large that it filled most of the lab’s ground
floor (Gage, 2016). Soon thereafter, two researchers at the University of Illinois, Hiller and
Isaacson, created the first musical work completely written by AI (Li, 2019). It was called ‘The
Illiac Suite’ and consisted of a four-part string quartet. This name came from the computer behind
it, the Illinois Automatic Computer, which “was the first supercomputer to be housed by an
academic institution” (Gage, 2016). It emulated different classical music styles such as Baroque
and Renaissance. The machine worked by manually entering code on paper tape and “waiting for
it to blurt data back out,” which took the form of musical song notation (Gage, 2016). As the
machine got more refined, Hiller decided to incorporate Markov Chains, a probability-based
mathematical system in which the choice of the current note depends only on the preceding note.
One important thing to mention is Hiller's desire for purity in his machine and experiment. He did
not touch what the computer composed, despite how unnatural sounding it could get, and requested
that “very few people get to interact with [the machine]” (Gage, 2016). This secrecy only added to public awe in
their perceptions of the truly landmark piece. Following this breakthrough, Russian researcher
Zaripov created the first AI-based algorithm for music creation called the URAL-1 computer in
the 1960s. It used a simple algorithm, binary arithmetic, and vacuum tubes to generate frequencies
(Urnev, 2012). The hardware consisted of vacuum tubes, three memory storage devices, rack
cabinets, magnetic tape, a fixed length of clock-rate, and Electron valve circuits (Urnev, 2012).
While it worked successfully, it was difficult to use, expensive, and extremely large in size. It
would also frequently break due to the delicate nature of its vacuum tubes.
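To make Hiller's Markov-chain idea concrete, the sketch below generates a short melody in which each note depends only on the note before it. The note set and transition probabilities are invented for illustration and are not taken from the Illiac Suite experiments.

```python
import random

# Hypothetical first-order transition table: probability of the next note
# given only the current note (the defining property of a Markov chain).
TRANSITIONS = {
    "C": {"C": 0.1, "E": 0.5, "G": 0.4},
    "E": {"C": 0.3, "E": 0.2, "G": 0.5},
    "G": {"C": 0.6, "E": 0.3, "G": 0.1},
}

def generate_melody(start_note="C", length=16, seed=None):
    """Generate a melody where each note depends only on its predecessor."""
    rng = random.Random(seed)
    melody = [start_note]
    for _ in range(length - 1):
        next_notes = TRANSITIONS[melody[-1]]
        notes, weights = zip(*next_notes.items())
        melody.append(rng.choices(notes, weights=weights, k=1)[0])
    return melody

print(" ".join(generate_melody(seed=42)))
```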
Musicians and researchers alike built off of these initial attempts with more advanced
algorithms. In the 1980s, University of California’s David Cope developed Experiments in Music
Intelligence (EMI). This was a generative model system, coined Emmy for short, that analyzed
existing songs, and created new pieces of music based on them (Li, 2019). His computer was
revolutionary in that it combined the previously seen “rule-based” methods of learning with an
element of surprise. To do so, “Cope developed ‘a little analytical engine’ that could insert some
randomness within the predictability” (Adams, 2010). It was this inserted randomness that made
the songs more convincing and helped drive the narrative of the song forward. It was the missing
piece of the puzzle - a way to incorporate a suspenseful storytelling aspect to it. Cope ended up
pushing one button on Emmy and returning to find that she had “produced 5,000 original Bach
chorales” (Adams, 2010). He took these pieces, filtered them for the most enticing, and released
them on an album called “Bach by Design.” Public sentiment was overwhelmingly positive. They
admired Cope's work because it was “far more than copying, it carries the recognizable DNA of
the original style and fashions it into something recognizable but entirely new” (Adams, 2010).
This was the first time an AI had not just emulated based on the songs it was fed. It had created
something indistinguishably new and different. This field started to pick up in the 1990s, with
David Bowie creating many random synthesizers and a lyric generating app called Verbasizer that
garnered public attention. It was during this decade that researchers began using Random Forest
Algorithms for classification and regression models and utilizing the recent invention of long
short-term memory recurrent neural networks (LSTM) by Sepp Hochreiter and Jürgen
Schmidhuber to create better models (Jakupov, 2021). Slowly, with time, the AI got more accurate.
In the 2000s, researchers once again trained computers to emulate Bach and compose songs in his
style with a model called DeepBach (arXiv, 2020). They then asked listeners to decipher whether a
piece being played was written by Bach or by the AI. The model was so effective at imitating
Bach that around half of the listeners in the 1,600-person sample could not tell the difference
between a real Bach piece and an emulated piece (arXiv, 2020). It's important to note that only
75% of participants guessed correctly whether a piece was by Bach to begin with. Thus, this
model was extremely effective.
Today, most AI music creation algorithms rely on a deep learning network, a type of
machine learning that uses multiple layers to transform the inputted data (Moolayil, 2020). This
works similarly to early algorithms that were used to train machines, but differently in that it allows
for a high degree of complexity. For example, if a researcher feeds the AI algorithm a C Major
chord, it will break this chord down into its individual notes C, E, and G and then predict another
chord from it. This network benefits from a large amount of data, so in order to create a successful
model, you must “feed the software tons of source material, from dance hits to disco classics”
(Deahl, 2018). Additionally, in order for the algorithm to pick up on musical patterns, it must have
many examples so it can learn these patterns and use them to form its own music. The four current
initiatives highlighted in the next section are the IBM Watson Beat, Google Magenta, AIVA, and
Boomy which appear to have successful results.
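Before turning to those initiatives, the chord-to-note idea described above can be illustrated with a small, purely hypothetical sketch: chords are encoded by their pitch classes and a tiny network is trained to guess the next chord in a toy progression. The chord vocabulary, progression, and network are invented for this example and do not correspond to any of the systems discussed here.

```python
import numpy as np
import tensorflow as tf

PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
CHORDS = {  # illustrative chord vocabulary, not from any of the cited systems
    "C":  ["C", "E", "G"],
    "F":  ["F", "A", "C"],
    "G":  ["G", "B", "D"],
    "Am": ["A", "C", "E"],
}
CHORD_NAMES = list(CHORDS)

def encode(chord):
    """Multi-hot vector over the 12 pitch classes for the chord's notes."""
    vec = np.zeros(12, dtype=np.float32)
    for note in CHORDS[chord]:
        vec[PITCH_CLASSES.index(note)] = 1.0
    return vec

# Toy training pairs (current chord -> next chord) from a repeated I-vi-IV-V loop.
progression = ["C", "Am", "F", "G"] * 50
X = np.stack([encode(c) for c in progression[:-1]])
y = np.array([CHORD_NAMES.index(c) for c in progression[1:]])

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(12,)),
    tf.keras.layers.Dense(len(CHORD_NAMES), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=20, verbose=0)

# Ask the network what usually follows a C major chord in the toy data.
pred = model.predict(encode("C")[None, :], verbose=0)[0]
print("after C ->", CHORD_NAMES[int(pred.argmax())])
```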
The first revolutionary AI technology being used to create music is IBM Watson Beat. This
model uses 26,000 Billboard Hot 100 songs to learn how to compose music through reinforcement
learning, and a Cognitive Color Design Tool to synthesize album artwork (Gredler, 2019).
It then analyzes the “composition of those songs to find useful patterns between various keys,
chord progressions and genres completing an emotional fingerprint of music by year” (Gredler,
2019). Instead of thinking about AI replacing humans making music, the team behind IBM Watson
Beat thought about it as an augmentation of humans (Gredler, 2019). Watson Beat is a tool that
can be leveraged to make music faster and easier to create. It is designed to inspire and push the
boundaries of reality. In one experiment, the team at T3 fed IBM Watson a MIDI file with sounds
from three different instruments at 60 beats per minute. The instruments were a vibraphone,
strings, and bass. Then they left it to learn and create for 100 minutes. When they came back, it
had synthesized an impressive one-minute-47-second composition using the input as inspiration. It added
a beat, unique effects, and different chord progressions. It would not be possible to tell if a person
or AI created this piece.
The second AI technology being widely used in the Music Industry is AIVA Technologies.
AIVA stands for Artificial Intelligence Virtual Artist. This startup was founded by a team of
researchers who wanted to provide a technical solution for film directors, advertising agencies,
and game developers (Kaleagasi, 2017). Similar to IBM Watson Beat, AIVA uses reinforcement
learning to “understand the art of music composition… and achieve the best sound quality
possible” (Kaleagasi, 2017). Interestingly, AIVA was the first AI to officially get the worldwide
status of Composer in official documentation (Kaleagasi, 2017). The unique value proposition of
AIVA, however, is its focus on the cinematic style used in many films or video games. To get this
sound, they use classical music from Bach and Beethoven to train it, which also avoids copyright
infringement, since their music is in the public domain. AIVA's unique musical soundprint lies in its
use of cinematic strings and soft piano, combined with wondrous, dream-like chord progressions.
The third AI tool helping musicians create music is Google Magenta. This is a research
project by Google AI that seeks to explore “the role of machine learning as a tool in the creative
process” (Synced, 2018). It consists of an open-source Python library powered by TensorFlow and
also uses recurrent neural network (RNN) models that generate songs based on a given input
and pattern ("Magenta: Music and art generation with machine intelligence.," 2020). Many tools
it contains manipulate source data and then use this data to create, test, and train models. These
include Continue, Generate, Drumify, Interpolate, and Groove. Musicians can input their musical
choices, which it will use to create MIDI Files in their directory of choice that they can drag into
Logic Pro, Pro Tools, or a similar DAW (Synced, 2018). This AI tool is incredibly effective for
musicians who want a seamless working setup between a Machine Learning program like this and
their DAW. It is the best tool that complements a musician’s already existing setup, because it can
simply be exported from Magenta and dragged in as a file into a musician’s project.
The last revolutionary AI based software helping musicians create music is called Boomy.
This is the most user-friendly option to create music and does not require programming or machine
learning skills. It consists of a website with a simple UX that allows users to pick a music style
from the list. Then “algorithms create a full instrumental track that can be manually rearranged
and fine-tuned” (Roettgers, 2021). After the song is complete, users can go in and manually add
vocals, edit the composition, and edit the production before saving it to their account. To date,
Boomy users have created an astounding 5,331,192 songs, equivalent to around 5.6% of the
world's recorded music (Boomy, 2022 ). Thus, its power lies in the speed at which it can compose.
Instead of training their algorithms based on “hit” songs, which could implicate them in serious
copyright infringement lawsuits, engineers at Boomy decided to take a “bottom-up approach by
leveraging previous experience in artists and repertoire (A&R) research to train the system to build
organic, original compositions from scratch” (Ramage, 2021). This takes the form of “advanced
algorithms that are doing automatic mixing, deciding what sound should go together—what are
the features of those sounds, how do those fit together, what is the perceived loudness rate of those
sounds” (Ramage, 2021). Boomy encourages users to release their songs on streaming platforms
such as Spotify and Apple Music, but, thus far, no hits from this website have been created
(Tarantola, 2021). Thus, this is an incredibly user-friendly and time-efficient method to generate
thousands of songs quickly, which a person can then manipulate or customize to their liking.
Marketing Music
Big data helps music companies understand users' listening preferences and push certain social
media channels over others for different artists, resulting in more efficient and targeted ways to
reach consumers. The first way data can be used to market music better is through aggregating
streaming data and examining demographics, consumer behavior, and tastes. This gives an artist a
better understanding of their listeners, where they are located, and their age. This, in turn, helps
them target their live shows. For example, if we have an R&B artist whose listeners live in Texas,
Arizona, and New Mexico, it will make more sense for them to play a show in Dallas, Texas than
Missouri or Vermont. As one executive notes, “if you are really asking questions as you’re looking,
you can see patterns and movements of audiences and how consumers are behaving” (Setaro,
2021). While there may be too much information for a person to clean and see these patterns,
computers can find them very quickly. For example, after visualizing the data, one group saw that
“Latin artists overperform on Facebook” (Setaro, 2021). Thus, they quickly started using their
resources more effectively and pushing Latin artists on Facebook, focusing their marketing efforts
there. Additionally, if an artist sees that his or her Twitter page has low engagement and minimal
positive effect on streams generated or tickets bought, they may choose to delete this page and
instead focus on another platform that does a better job of promoting their music. All of this can
be learned from listener or platform analytics through the cleaning, sorting, and grouping of data.
A second way data has helped market music is through social media analytics. This is done
by tracking post likes, mentions, reposts, and followers on Instagram, Facebook, Twitter, Tik Tok,
and other similar platforms. Tik Tok analytics deserve particular focus, because Tik Tok's influence
on music cannot be overstated. In the last year, 175 trending songs on Tik Tok made the main
Billboard Charts and seven from the Top 10 Rising Artists in December 2021 were pushed up there
from Tik Tok (Charmetric, 2021). Tik Tok allows for sponsored video advertisements, and more
importantly, a space for viral trends to take off using songs. As these trends and songs become
more viral, people often navigate back to Spotify and Apple Music to add them to their playlist.
One example of this is Ankit Desai’s work on analytics for Universal Music. He saw that Tik Tok
was reviving Logic’s song “1-800-273-8255” and that millions of people were adding it and
playing it obsessively. He asked Universal to invest Tik Tok marketing dollars into it, resulting in
the song rising to No. 3 on Billboard Top 100 (Setaro, 2021). Thus, by investigating social media
analytics, they were able to capitalize on a song’s Tik Tok popularity which resulted in millions of
streams.
Understanding and Predicting a Song's Success
Another advantage of applying data in the Music Industry is the ability to forecast the success of
a song before its release. This is accomplished by understanding and measuring key features of
successful songs and aggregating customers' preferences. A successful application of such
an approach was the creation of the “Hit Potential Equation” in 2012. This was an equation, created
by a team of scientists from University of Bristol’s Intelligent Systems Laboratory under Dr. Tijl
de Bie, that tried to predict if a song would make it into the Top 5 UK Charts using a machine
learning algorithm (Brown, 2011).
Score = (w1 x f1) + (w2 x f2) + (w3 x f3) + (w4 x f4), etc.
Where: w = weights and f = features of a song
In simplest terms, songs were first scored according to their audio attributes, such as
loudness, energy, length, and tempo using the above equation ("Can science predict a hit song?,"
2011). These scores were then taken and compared to a database of UK Top-40 singles charts for
50 years using a machine learning algorithm. This allowed the scientists to predict whether the
scored songs would “hit” and make it into the Top 5 or “miss”, obtaining a striking 60% accuracy
rate doing so ("Can science predict a hit song?," 2011). One reason for its higher accuracy rate was
that it took into account that musical tastes evolve by slightly tweaking the “Hit Potential
Equation” for each era, setting different weights to different features. For example, they found that
“low tempo, ballad-esque musical styles” were more popular in the 80s, while louder, higher
energy, and more danceable tunes flourished in the 90s (Brown, 2011). Acknowledging these
musical differences and accounting for them in the algorithm helped the model to achieve more
refined predictions.
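As a minimal illustration of how such a weighted score might be computed, the sketch below uses a handful of invented feature names and era-specific weights; the actual Bristol model used 23 features with weights learned from 50 years of chart data.

```python
# Hypothetical weights for a few audio features; the real "Hit Potential
# Equation" learned era-specific weights for 23 features from UK chart data.
ERA_WEIGHTS = {
    "1980s": {"loudness": 0.1, "energy": 0.2, "tempo": 0.1, "danceability": 0.1},
    "1990s": {"loudness": 0.3, "energy": 0.3, "tempo": 0.2, "danceability": 0.4},
}

def hit_score(features, era="1990s"):
    """Weighted sum of song features: score = sum(w_i * f_i)."""
    weights = ERA_WEIGHTS[era]
    return sum(weights[name] * features.get(name, 0.0) for name in weights)

song = {"loudness": 0.8, "energy": 0.7, "tempo": 0.6, "danceability": 0.9}
print(round(hit_score(song, era="1990s"), 3))  # higher score -> more "hit-like"
```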
While this equation was a revolutionary attempt to deploy a machine learning approach in
music, it lacked in several ways. Firstly, it only mined data from the United Kingdom, choosing to
ignore the unique music tastes of the rest of the world. Thus, if artists from the US or Germany or
South Korea try to apply the equation and machine learning algorithm to their songs, they would
find that it is not as accurate or appropriate. Secondly, it only achieved a 60% accuracy rate
(Brown, 2011). While this is better than the 30 to 50% accuracy rate achieved by scientists at Tel
Aviv working on a similar music data project, it still means the model correctly predicts whether a
song will make it into the Top 5 of the UK charts only slightly more than half of the time. Thirdly, researchers
only took into account twenty-three total features when scoring the songs. While twenty-three
features is certainly better than merely a handful, data-driven approaches are normally more
accurate with more data points and there are clearly some meaningful features still missing from
consideration.
Apps such as Shazam can identify music by “listening” to a snippet of it using a complex
Music Recognition Algorithm. The app is used by more than one billion people across the globe
and is incredibly powerful because it represents peoples’ current preferences and tastes (Whalen,
2019). People use it when they listen to a song they like but do not know or cannot recollect the
title. They can simply open the app, record a small snippet of the song in question, and wait for
the app to tell them which song they are currently hearing. Shazam works by “marrying the audio
fingerprint of millions of songs to a small snippet sampled out of the air in a noisy bar or restaurant”
(Whalen, 2019). In simplest terms, it first picks up the song being played into it. Then it captures
the sound waves and converts them to the frequency domain, which “acts as a type of fingerprint or
signature for the time-domain signal, providing a static representation of a dynamic signal”
(Jovanovic, 2015). Next, Shazam takes this unique fingerprint and compares different sections and
hashes of it to its database. Finally, it stores the collected data in its database, which will be used
by Shazam engineers to draw insights regarding current song popularities. By analyzing the data
of song requests in aggregate, Shazam was able to predict two 2014 Grammy winners: Macklemore
& Ryan Lewis for Best New Artist and their album “The Heist” for Best Rap
Album (Hujran et al., 2020). It was also able to predict many viral, up-and-coming
songs from new artists, who were then picked up by major labels before they made it mainstream.
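A highly simplified sketch of the fingerprinting idea, not Shazam's proprietary algorithm, is shown below: compute a spectrogram, keep the strongest spectral peaks in each time slice, and hash pairs of peaks into landmarks that could be looked up in a database.

```python
import numpy as np
from scipy.signal import spectrogram

def fingerprint(samples, sample_rate, peaks_per_slice=3):
    """Toy landmark fingerprint: hash pairs of dominant spectral peaks.

    A simplified illustration of the general idea, not Shazam's actual
    algorithm or parameters.
    """
    freqs, times, spec = spectrogram(samples, fs=sample_rate, nperseg=1024)
    hashes = []
    for t_idx in range(spec.shape[1]):
        # Indices of the strongest frequency bins in this time slice.
        top = np.argsort(spec[:, t_idx])[-peaks_per_slice:]
        peak_freqs = sorted(int(freqs[i]) for i in top)
        # Pair up neighboring peaks and hash (f1, f2, coarse time) as a landmark.
        for f1, f2 in zip(peak_freqs, peak_freqs[1:]):
            hashes.append(hash((f1, f2, int(times[t_idx]))))
    return hashes

# Example: fingerprint one second of a synthetic 440 Hz + 880 Hz tone.
sr = 8000
t = np.linspace(0, 1, sr, endpoint=False)
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
print(len(fingerprint(signal, sr)), "landmark hashes")
```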
Current Music Industry Trends
In order to create a successful song, it is also important to closely monitor and study current Music
Industry trends data. Songs exist in context of the ecosystem that the listener resides in. Each
decade has a unique sound. For example, the 1920s were renowned for their jazz sound and the
1970s focused heavily on a more up-beat disco sound. Songs that did well in one decade might
not do as well in another decade. Thus, social listening and looking at aggregate time-sensitive
industry data are imperative. The first author took this into consideration when creating the new
song to help ensure that it appeals to listeners in the 21st century.
Researchers studying trends and the predictability of success in contemporary songs used
random forest, a type of AI classification algorithm, to find that songs are becoming more ‘sad’
and less ‘happy’ or ‘bright’ (Interiano et al., 2018). They also found that successful, charting songs
sounded more ‘female’ than other released songs that did not do as well. A second Midyear 2021
study done by Billboard and MRC Data found that rock, indie, and disco are making a comeback.
Additionally, Korean and Afrobeat influences cannot be overstated (MRC Data's 2021 U.S.
Midyear Report, presented in collaboration with Billboard, 2021). Thus, using this preliminary
industry data, we see that we might find success creating a darker, sadder song featuring our vocals.
We should utilize streaming platforms to publish our songs, given the immense popularity of
streaming. Lastly, we might also want to include rock, indie, and disco influences to ride on the
current popularity of these genres in the 2020s.
The Fallacy of the “Perfect” Pop Song
While the use of data and AI have worked to democratize music and make it faster, cheaper, and
less complex, there are limitations to their uses. Numbers and statistics cannot explain everything
regarding a song’s success. Listening to music is a highly personal and subjective experience.
Thus, instead of striving to create the “perfect” song, the goal of this study is to understand whether
analytics from AI techniques can help create a song that leads to more streams, a close proxy for
success or failure.
Methodology
The first author of this paper is a musician with her own Spotify channel (görl), which was used
to complete this study. To set a control, or benchmark, the two latest songs released on the görl
channel were chosen. The first song was titled I Messed Up (Birk, 2021a) and the second was titled
Ready for Your Love (Birk, 2021b). Both songs were released in 2021.
In order to manufacture a successful song based on analytics and insights, a three-pronged
approach was used. Firstly, data scraping of Spotify playlists was done to generate the
optimal characteristics of the most popular songs, which would serve as parameters for creating the
next song. Secondly, an AI technique was used to generate inspiration for the instrumentation
and lyrics of the next song. Thirdly, a listener questionnaire was created to better capture the trend
which included the mood, preference and context for the song. Each of these approaches are
detailed below.
Spotify Playlist Data Scraping
The Spotify data was collected to analyze the listeners' Spotify playlists and discover the
characteristics of the songs they enjoy, which make those songs successful. The characteristics
collected included the optimal song length, tempo, time signature, loudness, and liveness for the audience.
Spotify for Artists' built-in tools and Spotify's Web API, accessed via Python code, were used to
extract these features. The Spotify for Artists tools helped identify the listener playlists to which
görl songs had been added by listeners. The top 50 playlists were used to retrieve the metadata
and feature data for all of the playlists' songs. The output of this process is shown in Figure 1.
Figure 1: Spotify Artists Tool view

The metadata included 1,260 rows of data with features such as Song Name, Album, Artist,
Release Date, Length, Popularity (in playlist), Acousticness, Danceability, Energy,
Instrumentalness, Liveness, Loudness, Speechiness, Tempo, and Time Signature, as shown in
Figure 2.

Figure 2: Metadata of Top Playlists that include görl songs

A separate Music BPM Dataset (https://www.bpmdatabase.com) was used to find the average
BPM of similar songs. This dataset of 10,000 songs listed by genre was filtered by the genres of
“chill out”, “dance pop”, and “power pop”, and then queried for the mean song tempo, which came
out to be 132 (131.99) beats per minute. This data helped provide the overarching structure of the
new song.
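A minimal sketch of how this extraction could be done with the spotipy client for the Spotify Web API is shown below; the credentials, playlist ID, and column selection are placeholders, and the study's actual scripts may differ.

```python
import pandas as pd
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

# Placeholder credentials; substitute real values from a Spotify developer account.
sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials(
    client_id="YOUR_CLIENT_ID", client_secret="YOUR_CLIENT_SECRET"))

def playlist_audio_features(playlist_id):
    """Return a DataFrame of audio features for every track in a playlist."""
    items = sp.playlist_items(playlist_id)["items"]
    track_ids = [it["track"]["id"] for it in items if it["track"]]
    features = sp.audio_features(track_ids)
    return pd.DataFrame([f for f in features if f])

df = playlist_audio_features("PLACEHOLDER_PLAYLIST_ID")
# Average the characteristics used in the study (tempo, loudness, etc.).
cols = ["duration_ms", "acousticness", "danceability", "energy",
        "instrumentalness", "liveness", "loudness", "speechiness", "tempo"]
print(df[cols].mean())
```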
AI Machine Learning Models
The two machine learning models used to inspire the instrumentation and lyrics for the new song
were Magenta from TensorFlow and a Recurrent Neural Network (RNN) lyric generation model.
Magenta is a popular AI tool that has been frequently used in this area ("Magenta: Music
and art generation with machine intelligence.," 2020). Magenta Studio ships with built-in macros
that can be used directly or through standalone applications such as Drumify, Generate, and
Groove. For this study the model was created using Python code and macros. This helped inspire
the main melody and the bassline of the new song.
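As an illustration of how a primer melody might be prepared for Magenta in Python, the sketch below uses the note_seq library to build a short NoteSequence and export it as MIDI; the pitches, timings, and tempo are placeholders rather than the study's actual inputs, and the resulting file would then be fed to a Magenta generator or the Magenta Studio tools.

```python
import note_seq
from note_seq.protobuf import music_pb2

# Build a short primer melody as a NoteSequence (pitches/timings are placeholders).
primer = music_pb2.NoteSequence()
for i, pitch in enumerate([60, 62, 64, 67]):          # C, D, E, G
    primer.notes.add(pitch=pitch, velocity=80,
                     start_time=i * 0.5, end_time=(i + 1) * 0.5)
primer.tempos.add(qpm=118)                            # tempo suggested by the analysis
primer.total_time = 2.0

# Save as MIDI; this file can be dragged into a DAW or used as a Magenta primer.
note_seq.sequence_proto_to_midi_file(primer, "primer.mid")
```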
This RNN model works by processing sequences, such as words in a sentence or daily
stock prices, “one element at a time while retaining a memory of what has come previously”
(Koehrsen, 2018). Unlike other machine learning models, such as the bag of words model which
goes word by word, an RNN model considers the whole sentence together and the context of each
word before making a decision. A simplistic view of an RNN model is shown in Figure 3. These
models are incredibly effective for application with the English language. Sometimes important
context can only be inferred from the entire sentence, not from singular words. Figure 3 (Chollet, 2021)
demonstrates the different input, recurrent connection, and output components. Building the Lyric
Generation Model using RNN and Python was a little more challenging. It was based on an article
from ActiveState written by Nicolas Bohorquez (Bohorquez, 2021). The data sources were slightly
modified to adapt to the desired song outcome.

Figure 3: A Simplistic RNN Model (F. Chollet, 2017)

This model works extremely well in creating lyrics and poetry because of its structure and its focus
on generating an emotional response rather than an intellectual one in the reader [or] listener
(Bohorquez, 2021). To build the model, a dataset from the Million Song Dataset, which contains
millions of song lyrics from current popular songs, was combined with another dataset of popular
dark academia, indie pop, and dark pop lyrics from the author's existing repository. Since lyrics
follow many conventional patterns, they are easier to learn and emulate than other forms of written
work. To improve the accuracy of the model, the dataset was filtered for uncommon words, a fixed
sequence length was set for the training set, Long Short-Term Memory (LSTM) functions were
created, and the temperature was varied throughout the model's iterations. Running this RNN
model took about 12 days in total, due to the sheer volume of data. The final accuracy score of the
model was 0.87.
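A compressed sketch of what such a word-level LSTM lyric generator might look like in Keras is shown below; the corpus, sequence length, and layer sizes are placeholders rather than the study's actual configuration, and the temperature parameter controls how adventurous the sampled lyrics are.

```python
import numpy as np
import tensorflow as tf

SEQ_LEN, EMBED_DIM, UNITS = 10, 64, 128   # placeholder hyperparameters

def build_dataset(lyrics_lines):
    """Turn lyric lines into fixed-length word sequences and next-word targets."""
    tok = tf.keras.preprocessing.text.Tokenizer(oov_token="<unk>")
    tok.fit_on_texts(lyrics_lines)
    ids = [w for line in tok.texts_to_sequences(lyrics_lines) for w in line]
    X = np.array([ids[i:i + SEQ_LEN] for i in range(len(ids) - SEQ_LEN)])
    y = np.array(ids[SEQ_LEN:])
    return tok, X, y

def build_model(vocab_size):
    return tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, EMBED_DIM, input_length=SEQ_LEN),
        tf.keras.layers.LSTM(UNITS),
        tf.keras.layers.Dense(vocab_size, activation="softmax"),
    ])

def sample_next(model, seed_ids, temperature=0.8):
    """Sample the next word id, with temperature controlling randomness."""
    probs = model.predict(np.array([seed_ids[-SEQ_LEN:]]), verbose=0)[0]
    logits = np.log(probs + 1e-9) / temperature
    probs = np.exp(logits) / np.sum(np.exp(logits))
    return int(np.random.choice(len(probs), p=probs))

# corpus = load_lyrics(...)  # placeholder: the combined lyric datasets
# tok, X, y = build_dataset(corpus)
# model = build_model(len(tok.word_index) + 1)
# model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(X, y, epochs=20)
```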
Listener Questionnaire
The third and final approach was to understand the listener choices regarding the trend which
included the mood, preference and context for the song. This was accomplished by asking the
listeners to fill out a questionnaire to collect the preferences of the segment of the listeners who
follow the Spotify channel. The questionnaire consisted of eight questions and asked respondents
to share their thoughts, feelings, and opinions on görl’s current music. Some sample questions
included: What is your favorite/least favorite song and why? What emotions do you associate with
our songs? What instruments do you associate with our songs? When do you listen to our music?
What do you want to hear with our new music? Apart from Spotify, data was also collected by
placing the survey on Instagram and Tik Tok. The survey ran for a week and 22 data points were
collected. The findings from the three approaches are presented in the next section.
Findings
The findings of the study are presented under the same sections: Spotify playlist data
scraping, AI machine learning outcomes, and the listener questionnaire.
The Song Characteristics
The first set of indicators for a successful song was derived from this list as a mathematical
average of each of the dimensions. Interestingly, as shown in Figure 4, the listeners enjoy songs
that are an average of 3 minutes 18 seconds in length, very danceable, extremely vocal heavy,
slightly more electronic sounding than acoustic sounding, and an average of 118 beats per minute.
Figure 4: The "mean" Characteristics of a Hit Song
Table 1 shows the comparative characteristics of the two benchmark songs and the final song.
The acousticness of the song was lowered to 0.41, the instrumentalness was raised to 0.07, the
tempo was raised to 118, the energy was raised to 0.52, and the speechiness was raised to 0.08. The other
characteristics were not changed as the recommended values after analysis were close to the
average of the benchmarked songs. The final song was created to the exact specifications
recommended by the AI algorithm.
Table 1: Comparative Characteristics of the New Song with the Benchmarked Songs
Characteristic       Benchmark 1:     Benchmark 2:            AI-Based Song:
                     I Messed Up      Ready For Your Love     Dancing with Ghosts
Length               3.00             2.58                    3.18
Acousticness         0.76             0.87                    0.41
Instrumentalness     0.05             0.01                    0.07
Loudness (dB)        -10.8            -8.6                    -9.71
Tempo (BPM)          86               104                     118
Danceability         0.67             0.79                    0.61
Energy               0.23             0.31                    0.52
Liveness             0.16             0.09                    0.18
Speechiness          0.05             0.05                    0.08
Time Signature       4/4              4/4                     4/4
AI Machine Learning Models
The model was very valuable in informing the possible lyrics of the new song. The outcome of the
model helped by offering lyrical phrases, thoughts, and ideas. The first epoch generated
nonsensical phrases such as “I’m on the edge of wine and I can see” and “I can’t say my name, I
want to know my name.” Yet as the model got more accurate, it started generating phrases such as
“I think about you” and “I’ve got a chance to treat you this way.” After the model finished and
reached Epoch 20, the author extracted certain phrases that came across as “extremely clever”.
These included: “In the rain on our own”, “Never been one to believe”, and “Like only a dream”.
The author used these snippets as her creative cues to determine a melody. The final result was the
story of a girl dancing with ghosts in the rain on her own who then tries to bring a friend with her,
but the friend can’t see the ghosts (Birk, 2022). Since this is a study, the author documented the
song creation time. It ended up taking four hours and 30 minutes to finish, which was significantly
faster than the 20+ hours it usually takes her.
Listener Questionnaire
The raw data for each questionnaire is provided in the Appendix. In identifying their favorite görl
song/s, respondents overwhelmingly chose “Backseat Dreamin’”, “Apricot Air”, and “Do Not
Disturb”. Reasons included: they “flowed well”, “loved the production”, “like the vocals,
harmonies, and vibes of the overall songs”, “the themes are relatable”, and the songs are “catchy
and easy going”. Respondents noted that they didn’t have a least favorite song, for the most part.
They also noted that the two instruments they most associated with the songs were vocals and
piano. Figure 5 shows the words that the listeners used to describe their favorite songs and when
they listened to these songs. These findings helped to understand how listeners value the görl
channel and when they listen to this channel.
Figure 5: Words to describe the vibes of the songs and the places listeners are likely to
listen to these songs, emotions evoked by the songs, and expectations for future songs.
Based on the summary of the questionnaire, it was found that the listeners enjoy the chill, happy,
low-key, safe, relaxed, and excited feels of the songs and associated the music of görl with dreams,
indie, pop, and movie credits, and looked forward to hearing angstier, darker songs with better
vocal editing. In order to see if data-driven methods to create and market a song lead to more
streams, the number of streams of the new release was compared to the streams of the benchmarked
songs.
The baseline songs “I Messed Up” and “Ready For Your Love” garnered 187 streams and
33 streams in their first two weeks, respectively. Averaging the two songs together gives a
baseline of 110 streams in the first two weeks without the use of data. In the first two weeks upon
its release, “Dancing with Ghosts” garnered 318 streams on Spotify. This is almost three times
more streams compared to the baseline. Since there was no additional promotion for this music, it
is safe to deduce that this increase in streams came from the data related components.
Table 2: Görl Song Streams for Two Weeks

Song Name               Streams for 2 weeks
I Messed Up             187
Ready For Your Love     33
Dancing with Ghosts     318
Discussion
The final song was created using the three-pronged approach mentioned above: the musical
information from Magenta, the lyrical information from the RNN model, and the listener input
from the questionnaire. The resulting song was 3:19 minutes in length, 130 beats per minute, and
in the key of B minor to add darkness. The voice is light, high, and breathy, with more reverb and
EQ added than usual for a better vocal edit. In terms of instrumentation, we kept
it simple with bass, drums, synths, a choir, and piano. After recording each instrument and vocal
take, the final song was mixed and mastered.
Distributing and Marketing the Generated Song
All the songs, including the benchmarked songs and the final recommended song, were placed on
Spotify and Apple Music by submitting them for distribution through CD Baby. This is one of the
many tools that allow anybody to create and submit music. A release date of Thursday, February
24th was chosen because Thursdays have the highest number of listeners and streams, compared
to any other weekday. By collecting listener data from our questionnaire and incorporating their
listening preferences into our song, the AI recommended song resonated more strongly with the
audience. We can also deduce that building our RNN Lyric Generation model made the lyrics feel
familiar yet enticing. Many listeners reached out and complimented the lyrics, noting that they
told a dark love story and were emotionally enticing. Additionally, combining these methods with
a Spotify playlist data analysis and overarching Music Industry research guaranteed that our song
was relevant and musically similar to other songs our listeners enjoyed.
Limitations of the Study
The first limitation encountered was the fact that it was only performed with one song and
one band. Hence this study can be considered a successful pilot study that could form a basis for a
more robust study. However, this study is the first of its kind where the three-pronged approach of
recommendations of the characteristics of the song, recommendations of the lyrics and the trend
from a survey were considered as a whole to create a new song. The results of the pilot are very
promising and in line with the directional results in each of the individual ideas. In order to draw
more widely applicable and accurate insights, this experiment would have to be performed on a
much larger sample of 500-600 musicians. However, this would be extremely difficult and involve
many moving parts. Even when controlling for the majority of elements such as genre or release
dates, there would still be uncertainties of human and procedural errors and a large variety of
listeners with different behaviors, tastes, and preferences.
Additionally, the logistics of having every musician create and run RNN
models, spreadsheets, and Python data analyses would make it nearly impossible to run smoothly. Thus,
alternative proposals should be considered. If future research were to be conducted, it would be
beneficial to create one singular data analysis and AI platform that musicians could customize to
their liking, deploy, and use to measure their success. This platform would be an all-in-one
SaaS (software as a service) platform where users could link their Spotify Artist profile data, fill
out questionnaires about their artistic visions, and input their previous songs. Then an AI Machine
Learning algorithm would look through this data and predict certain song snippets, tempos,
melodies, chord progressions, and lyrical information, among others. It would also have a separate
page solely dedicated to song promotion.
It is also important to note the immeasurable component of music. Songs are creative
processes at their core. Sometimes there are forces that cannot explain why one song does better
than another. When artists try to replicate the success of one song, they often find it impossible to
do. Some songs simply come at the right time with the right sound. Additionally, ranking songs is
a subjective process that no two people do the same. Since musical tastes are incredibly varied and
dependent on personal experiences, the meaning of a song resonates more strongly with one person
over another, even if the song itself is not as well-written or well-produced. There is a personal
connection that trumps the song composition itself that makes the song so alluring to that person.
Even using chart information to determine whether one song does better than another can lead to
misleading conclusions. Thus, the difficulty in comparing one song to another due to the emotional, personal, and subjective
elements makes it incredibly hard to define what a “good” song is. This, in turn, makes it hard to
tell AI how to write music beyond feeding it patterns, conventions, and lyrical ideas that people
find appealing.
Interestingly, some music executives do not condone using any data in the music-making
process. They argue that data “shouldn’t enter the equation until after the music is made” or we’ll
end up hearing song after song that only emulate what’s already popular and do not differentiate
themselves (Setaro, 2021). While this is an interesting stance, it’s important to note that current AI
and data-driven models do not merely take the popular sounds of today and use them to create
more of the same-sounding music. Instead, they go one step further and learn from these trends to
forecast and create the sounds of tomorrow. While it can be argued that solely depending on data
to make every decision can lead to uncreative, similar-sounding music, that is not how these tools
can be used to their full potential. Data is yet another tool to help us to succeed, and it should
definitely have a place in the entire creative song-writing process because it helps us make better
decisions.
Some executives even went as far as to note that they “have an adverse reaction to trying
to use data to change content” (Setaro, 2021). This is an interesting, albeit naive point. We are
always using data when creating content, whether it be the qualitative input of a friend, advice of
a family member, or view count on our music videos. If we have created two songs, one with
50,000 streams and another with only five streams, human psychology dictates that we try to
emulate the more popular song. Whether we are actively realizing this or not, data is always an
input into our creative process.
Conclusion
The use of data-driven AI approaches to making and marketing a pop song in 2022 directly led to
higher stream counts on Spotify. It was an effective method because it allowed us to better
understand our listeners’ preferences, similar song structures and melodies, meaningful lyrical
phrases, and the structure of a successful song. While it is difficult to say if an AI will ever fully
replace a composer or musician, the exponential growth in AI advancements does point to a world
where this could be possible. AI technologies are now capable of creativity. They can take an idea
and add a unique spin to it. Whether or not one believes they are capable of replacing humans
comes down to the definition of creativity and if they believe it can be taught and learned. While
as of right now AIs are not generating completely original ideas, since their creations are based on
input, there will probably come a time where they can be defined as creative because they get so
advanced that they start to generate new ideas based on their previous ideas. This would cut out
the human input component. One researcher on this topic is “very fond of Cope's remark that
‘Good artists borrow, great artists steal’” (Adams, 2010). Thus, he believes, as do many, that
creativity is rooted in taking different ideas from others and combining them to make original
creations. This is exactly what AI is doing. Thus, with AI advances showing no signs of slowing
down, it won't be long before a computer can be used to make new versions of every musical genre
that are indistinguishable from human-composed pieces. AI can work creatively and even
innovate, by creating new concepts and exploring new sounds that have never existed before.
While many feel that there will always be a human component to making music, it is hard to ignore
that AI technologies will improve to the point that the outcome of a human process will be
indistinguishable from an AI informed process. AI is already doing it faster, but with more time, it
has the potential to also increase its quality of music to the human level and beyond.
References
Adams, T. (2010). David Cope: 'You pushed the button and out came hundreds and thousands of sonatas'. The Guardian.
arXiv. (2020). Deep-learning machine listens to Bach, then writes its own music in the same style. MIT Technology Review.
Birk, J. (2021a). I Messed Up. New Jersey: Spotify.
Birk, J. (2021b). Ready For Your Love. Spotify.
Birk, J. (2022). Dancing With Ghosts. New Jersey: Spotify.
Bohorquez, N. (2021). How to build a lyrics generator using Python and RNNs. ActiveState.
Boomy. (2022). Make instant music with artificial intelligence. Boomy.
Brown, M. (2011). Pop hit prediction algorithm mines 50 years of chart-toppers for data. Wired.
Can science predict a hit song? (2011). Phys.org. Retrieved March 23, 2022, from https://phys.org/news/2011-12-science-song.html
Chanda, M. L., & Levitin, D. J. (2013). The neurochemistry of music. Trends in Cognitive Sciences.
Chartmetric. (2021). 2021 Music Industry Trends and the future of the music business. Retrieved March 29, 2022, from https://chartmetric.com/music-industry-trends/6mo-report
Chollet, F. (2021). Deep Learning with Python. Manning Publications.
Chollet, F. (2017). Deep Learning with Python. Manning Publications.
Chow, A. R. (2020). Musicians using AI to create otherwise impossible new songs. Time. Retrieved March 22, 2022, from https://time.com/5774723/ai-music/
Deahl, D. (2018). How AI-generated music is changing the way hits are made. The Verge.
A&E Television Networks. Apple launches iTunes, revolutionizing how people consume music. (2019). Retrieved March 29, 2022, from https://www.history.com/this-day-in-history/apple-launches-itunes
Fang, R., Ye, S., Huangfu, J., & Calimag, D. P. (2017). Music therapy is a potential intervention for cognition of Alzheimer's disease: A mini-review. Translational Neurodegeneration, 6(2).
Gage, J. (2016). First recording of computer-generated music – created by Alan Turing – restored. The Guardian.
Goldman Sachs Report on Music in the Air. Insights. (2016). Retrieved March 22, 2022, from https://www.goldmansachs.com/insights/pages/infographics/music-streaming/
Götting, M. C. (2021). Global Music Industry Revenue 2023. Media.
Gredler, B. (2019). Making music with IBM Watson Beat. T3.
Guo, W., Ren, J., Wang, B., & Zhu, Q. (2015). Effects of relaxing music on mental fatigue induced by a continuous performance task: Behavioral and ERPs evidence. PLoS ONE, 10(3).
Holden, C. (2001). Neuroscience: How the brain understands music. Science, 292(5517), 623.
Hujran, Alikaj, A., Durrani, U., & Al-Dmour, N. (2020). Big Data and its effect on the music industry. Proceedings of the 3rd International Conference on Software Engineering and Information Management, 5(9).
Interiano, M., Kazemi, K., Wang, L., Yang, J., Yu, Z., & Komarova, N. L. (2018). Musical trends and predictability of success in contemporary songs in and out of the top charts. Royal Society Open Science, 5(5).
Jakupov, A. (2021). Evolution of machine learning and music: Grace and beauty. Alibek the Rookie.
Jones, N. L. (2018). Pop psych: The impact of music and lyrics on emotion. University of Adelaide. Retrieved March 26, 2022, from https://digital.library.adelaide.edu.au/dspace/bitstream/2440/129144/1/JonesNL_2018_Hons.pdf
Jovanovic, J. (2015). How does Shazam work? Music recognition algorithms, fingerprinting, and processing. Retrieved March 23, 2022, from https://www.toptal.com/algorithms/shazam-it-music-processing-fingerprinting-and-recognition
Kaleagasi, B. (2017). A new AI can write music as well as a human composer. Futurism.
Koehrsen, W. (2018). Recurrent Neural Networks by example in Python. Towards Data Science.
Li, C. (2019). A retrospective of AI + Music. Medium.
Liao, S. (2019). How iTunes changed music. CNN.
Magenta: Music and art generation with machine intelligence. (2020). GitHub.
Manning-Schaffel, V. (2019). A musicologist explains the science behind your taste in music. NBCNews.com. Retrieved March 22, 2022, from https://www.nbcnews.com/better/lifestyle/musicologist-explains-science-behind-your-taste-music-ncna1018336
Menyes, C. (2014). If you're a musician, chances are you're totally undiscovered, says a new study. Music Times.
Merriam-Webster. Lyric: Definition & meaning.
Moolayil, J. J. (2020). A layman's guide to deep neural networks. Medium.
MRC Data's 2021 U.S. Midyear Report, presented in collaboration with Billboard. (2021). Retrieved from https://www.billboard.com/u-s-music-mid-year-report-2021/
Ramage, J. (2021). Copyright power to the people: How AI music creation start-up Boomy could shake up the music industry. A.I.
Rentfrow, P. J., Goldberg, L. R., & Levitin, D. J. (2011). The structure of musical preferences: A five-factor model. Journal of Personality and Social Psychology, 100(6), 1139–1157.
Roettgers, J. (2021). Boomy's AI makes music faster than humans, but hasn't written any hits - yet. Protocol.
Schäfer, T., Sedlmeier, P., Städtler, C., & Huron, D. (2013). The psychological functions of music listening. Frontiers in Psychology, 4, 511.
Setaro, S. (2021). How data is making hits and changing the music industry. Music.
Smith, D. (2022). Catalog Releases Now Account for 75% of U.S. Music Consumption — With 89% of Baby Boomers Streaming Songs. Digital Music News. Retrieved March 29, 2022, from https://www.digitalmusicnews.com/2022/01/06/audio-streams-2021-report/#:~:text=Predictably%2C%20a%20full%2099%20percent,topping%20those%20of%20other%20nations
Stanborough, R. J. (2020). Benefits of music on body, mind, relationships & more. Healthline. Retrieved March 25, 2022, from https://www.healthline.com/health/benefits-of-music
Stone, J. (2020). The state of the music industry in 2020. Toptal Finance Blog.
Synced. (2018). Google AI Music Project Magenta drops beats like humans. Medium.
Tarantola, A. (2021). AI startup Boomy looks to turn the music industry on its ear. Engadget.
Urnev, I. V. (2012). Electronic Digital Computers URAL-1. Retrieved March 26, 2022, from https://computer-museum.ru/english/ural_1.php
Wassenberg, A. (2019). Report: Why we like certain music: The brain and musical preference. Retrieved March 22, 2022, from https://www.ludwig-van.com/toronto/2019/05/31/report-why-we-like-certain-music-the-brain-and-musical-preference/
Whalen, M. (2019). Shazam is the most important new piece of data in music streaming. Medium.
Williamson, D. V. Emotional responses to music: The influence of lyrics. Music Psychology [Press release]. Retrieved March 25, 2022, from https://musicpsychology.co.uk/emotional-reponses-to-music-the-influence-of-lyrics/
Zatorre, R. J. (2018). Why do we love music? Paper presented at Cerebrum: The Dana Forum on Brain Science.
Zentner, M., Grandjean, D., & Scherer, K. (2008). Emotions evoked by the sound of music: Characterization, classification, and measurement. Emotion, 8(4), 494-521.
Author Information
Ms. Jessica Birk is a recent graduate of Rutgers Business School with a major in Finance and
Marketing and a minor in Music Technology, and she recently completed her Master's degree in
Data Analytics. The work presented in this paper is drawn from her Honors Thesis. Her research
interests include the use of machine learning in the arts and content marketing.
Dr. Madhavi Chakrabarty is an Assistant Professor of Professional Practice in the Marketing
Department of Rutgers Business School and was the advisor for the Honors Thesis on which this
paper is based. Her research interests include customer analytics, insights, marketing, and
optimization, with a deep understanding of digital ecosystems.
Appendix
Questionnaire Raw data