Digital sound production, however. Yes. There are all kinds of thoroughly unpleasant mathematics, none of which you actually need to know unless you're writing computer music software.
(I write computer music software, and I am also a jazz musician).
From personal experience, pattern-recognition is the most useful "applied math" skill when making music. I use it when identifying intervals between notes, and chord progressions, which you need when you're trying to get the idea out of your head and onto the instrument you're playing or song you're writing.
As for whether it's necessary, I think clearly not; the fact that lots of musicians aren't mathematical at all but create great music seems to prove it to me.
But it is interesting to think about musicians who do seem to think about music this way. Bach is definitely a good example, given how complex the system of counterpoint is. I'm not sure if she'd describe herself in these terms, but I've always gotten the impression that Laurie Spiegel thinks about music a little like that too. Then there's stuff like Coltrane's Giant Steps, where the whole piece is based around a sort of music theory "trick".
So maybe not generally, but there's definitely some awesome music that's come out of that kind of relationship.
Universal in the sense that a number of rocks or a number of sheep can be doubled just as a frequency can?
The notion that there are 8 subdivisions to a doubled-frequency interval isn't universal. Balinese gamelan doesn't even necessarily have an agreed number of "notes" in an "octave" from one village to the next.
It's pretty much the foundational idea of any modality. No matter how you divide it up, the purest harmony is doubling or halving.
I was asking to tease out some PoV. Again, gamelan doesn't necessarily have powers-of-two, or 12, etc., divisions of a doubling (or octave, if we're using that term); it's a non-Western percussion style with a surprising number of local variations in divisions and tunings (it's essentially near-unique to Balinese culture).
The Octave wikipedia entry includes:
Octave equivalence is a part of most musical cultures, but is far from universal in "primitive" and early music
but gets woolly on examples. Cheers for the response, appreciated.
But I suspect there’s a clear biological mechanism which makes it easy to mistake one octave for another from any source of roughly harmonic sound. This is due to the similarity in the overtones of two harmonic sounds that differ by an octave. I would be surprised if this mechanism isn’t universal, although its influence on various musical systems can obviously vary a lot.
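The overlap the comment describes can be sketched in a few lines (an illustrative sketch, not from the comment; the 100 Hz fundamental is an arbitrary choice):

```python
# Sketch of the mechanism described above: the overtone series of a
# harmonic sound at 2f is a subset of the overtone series at f,
# which is one reason octave-apart notes blend so easily.
f = 100.0  # arbitrary fundamental in Hz
lower_partials = {f * n for n in range(1, 17)}      # partials of the lower note
upper_partials = {2 * f * n for n in range(1, 9)}   # partials one octave up

print(upper_partials <= lower_partials)  # True: every upper partial coincides
```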
Yes, that's what I meant: the doubling of frequency. It might seem trivial, but the fact that doubling a frequency sounds "right" to humans is actually quite interesting. Why does it sound "right"?
So yes, the 12-tone scale is a universal thing - you want both octaves and fifths in your scale.
(12 is actually too many, so usually that's pared down to something like 4 or 5 or 7 tones; this is where you get cultural variation.)
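The "you want both octaves and fifths" point can be checked numerically (a quick sketch, not from the original comment): among small equal divisions of the octave, 12 comes closest to containing a pure 3:2 fifth.

```python
# For n equal divisions of the octave, find the step k whose ratio
# 2**(k/n) is nearest the just fifth 3:2, and record the error.
errors = {}
for n in range(2, 20):
    k = min(range(1, n), key=lambda k: abs(2 ** (k / n) - 1.5))
    errors[n] = abs(2 ** (k / n) - 1.5)

# 12 divisions give the smallest error in this range (2**(7/12) ~ 1.498)
print(min(errors, key=errors.get))  # 12
```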
The obvious exception in the Western system would be the blues scale, which arguably has 9 tones (7 equal-tempered notes, plus a just-tempered 3rd and 7th).
And Indian ragas break all of these rules. They have scales that don't have 8 notes, scales that don't use equal temperament, and even a few scales that don't repeat on octaves.
Math checks out.
> So yes, the 12-tone scale is a universal thing -
I don't follow the logic here though. It's certainly true that a 12-tone / Chromatic scale is ubiquitous within the Western Music tradition .. but the universe is reportedly a little larger.
Even Western Music includes exceptions like the 9-note augmented scale, though the argument can be made that it's a 12-scale with 3 bits "missing" - not a case that can be made about a non-western 7 note percussive scale.
Also, so-called "Western music" standardized on 12 tones very late in the process, long after the Chinese figured it out.
> a 12-scale with 3 bits "missing"
That's true of all scales, even the "non-Western" ones. Microtonality is added on top of the standard 12 tones for effect. (Synthesizers in pop music do the same trick.)
https://www.huygens-fokker.org/scala/
Note also that certain musical traditions were suppressed or eradicated due to their unfortunate habit of using dissonant notes such as minor seconds, as opposed to the consonant triads favored by a particular group recently in power around the world. Happy Easter!
As another commenter below has said, "mathematics might be a useful way to understand music", but it's not how compelling music is made.
Mathematics is fundamental to scales and the harmonic series, and knowing about it will help you refine certain choices, but it's not going to help you write a dramatic melody or an emotionally resonant chord progression, or play an energizing rhythm, even if there are mathematical explanations sometimes.
Good music comes from being a good listener, having a strong sense of what's possible, where it could go, and then delivering something surprising. Telling a story with your melody and supporting the arc of that gesture with harmony that accentuates or contrasts it.
Again, there's a mathematical explanation for harmony and dissonance, but players aren't thinking at that granularity. They're operating one, two, or three levels of abstraction above that: they're thinking about telling a story, evoking an emotion, and exciting an audience in the moment.
It's like telling someone they can paint a masterpiece because they understand Fe4[Fe(CN)6]3 makes an aesthetically pleasant blue pigment.
It's a great way to analyse music (e.g. to categorise, understand, and communicate detail), but that does not mean it's a good way to create it. There's a lot of beauty in finding those abstractions and I think that representation appeals to a lot of people here.
Discussions about timbre, instrumentation, and stylistic influence often mirror those about math. When you have 90 minutes to spare, I highly recommend strapping in for a listen to https://malwebb.com/notnoi.html.
There's a lot of really incredible musicians, composers, producers, and educators that go deep on the math. There's also plenty that don't. People build mental models in different ways. That's a good thing and a big part of what makes most art interesting.
You are probably aware that there are these things called synthesizers, which exist in both hardware and software: complex pieces of technology that can shape sound. There are people who specialize in creating them (with code and/or electronics), people who specialize in programming them (creating presets), and people who excel in using them to make music. And many more different profiles in between. Each will care about different aspects; they all contribute to making music.
Life is not black and white, and neither is music. What is even "good music"? What is your mental model for "the crowd on this site"? In your questions, aren't you reducing the possibilities of learning by putting these into boxes?
The world is big, life is rich and people are much more diverse than what one typically perceives.
All this being said, I think that's a process of convenience and a historical path, not an absolute constraint. We have some more flexible means of communicating with machines today, and I strongly encourage someone to work on a new UI for computer music. "Jazz trio piano, upright bass, and drums. Start drummer laid-back, piano blowing over the changes, then piano on top."
https://youtu.be/3poN6FDyB28?is=QjDzlmRQCMMbP_lS
What you wrote described an output, not a UI.
I couldn't figure out precisely what that video showed, but it was fascinating. Somehow it reminded me of the Orca music programming environment.
As a software developer I see that LLMs are better at the "craft" of making software.
Software developers' training is overwhelmingly analytical.
Musicians will experience the same: the quality of AI-generated music will be superior. But it will come as more of a shock, for the reasons you explain.
One can roughly prototype a song, giving it the structure, melody, harmony, rhythm, lyrics that a finished song might have, upload it and request a cover in a particular style. The output will often resemble a highly competent human performance.
And not once have I ever felt that these so-called intersections were anything other than contrived.
Of course we can interface with music from a mathematical perspective, but that doesn't mean that we should, or that there's anything particularly illuminating to glean from doing so.
Beyond the very basic math (honestly even that's perhaps too strong a word -- just because something is expressed in numbers doesn't make it _math_) of time signatures and some harmonic concepts up to maybe some of Slonimsky's work, doing so is IMO a fool's errand that exists only to fill space on a TEDx stage.
Think of it this way: if you first saw the word "HELLO", you could deconstruct it and remember that there are 11 lines and 1 circle, but that's not how you learn to read or write. You learn letters, which are collections of lines. So you learn the concept of "H": it has a sound, and it is 3 lines. You then learn to put letters together, how to sound out something that's written, and, with varying degrees of success (depending on the language), how to take something said and write it down.
Music theory is like that. Sheet music may be a bunch of circles and lines on a sheet but really it's describing keys and usually a chord-progression. Some sheet music will explicitly just list the chords at the top like A, Em, Asus4, etc.
The 12 notes are constructed from harmonics, specifically 2:1 and 3:2. This part is maths. But the frequencies are adjusted slightly in a system called "equal temperament" where the ratio of 2 adjacent notes is the 12th root of 2.
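The equal-temperament adjustment is small enough to demonstrate in a couple of lines (an illustrative sketch; the A4 = 440 Hz reference is the usual convention, not from the comment):

```python
# In 12-tone equal temperament each semitone multiplies frequency by
# 2**(1/12), so twelve semitones double it exactly.
A4 = 440.0
semitone = 2 ** (1 / 12)

print(round(A4 * semitone ** 12, 6))  # 880.0 -- one octave up
print(round(A4 * semitone ** 3, 2))   # C5, ~523.25 Hz
# the tempered fifth (7 semitones) is slightly flat of the just 3:2
print(round(semitone ** 7, 5))        # 1.49831
```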
From there you generally play a subset of those notes (often 7). That's called a scale (e.g. the major or minor scale). The chords in that scale can then be identified by a Roman numeral within a key: the I chord in the C major scale is C, and the IV chord is F. Depending on the starting note of the scale you'll get sharps (#) and flats (♭) to denote notes at a different pitch. An easy way to remember this is that the C major scale's whole and half steps land on just the white keys (starting from C). As an aside, so do the A minor scale's.
Why do I say all this? Because a huge amount of modern music is simply a I-IV-V chord progression within whatever scale you're using. So if you know a little theory, you can choose a key and a chord progression that will inherently sound nice together. There's more to it, of course, but understanding what a key is, what chords are, and what a chord progression is, is a pretty good start.
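The scale-and-Roman-numeral machinery above fits in a few lines of Python (a sketch under the comment's own conventions; sharps-only note names are a simplification):

```python
# Build a major scale from whole/half steps, then read off its triads
# by Roman-numeral degree (1-based): I, IV, V, etc.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]  # whole-whole-half-whole-whole-whole-half

def major_scale(root):
    i = NOTES.index(root)
    scale = [NOTES[i]]
    for step in MAJOR_STEPS[:-1]:  # last step returns to the octave
        i += step
        scale.append(NOTES[i % 12])
    return scale

def triad(scale, degree):
    # stack thirds: scale degrees 1-3-5 counted from the given degree
    return [scale[(degree - 1 + j) % 7] for j in (0, 2, 4)]

c = major_scale("C")
print(c)                                       # ['C', 'D', 'E', 'F', 'G', 'A', 'B']
print(triad(c, 1), triad(c, 4), triad(c, 5))   # the I, IV, V chords: C, F, G
```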
It's like Escher; he didn't have any clue that his intricate work would excite mathematicians and crystallographers.
Mandatory reference to GEB
A note has a fundamental and harmonics, which allows analogies to be drawn with RF engineering and quantum mechanics: https://www.google.com/search?q=any+good+parallels+between+i...
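The fundamental-plus-harmonics structure is easy to sketch (an illustrative toy, not from the comment; the 110 Hz fundamental and 1/n partial weights are arbitrary choices):

```python
import math

# A harmonic sound is a fundamental plus overtones at integer multiples.
# Summing a few sine partials with 1/n amplitudes gives a crude sawtooth-ish wave.
def harmonic_sample(t, f0=110.0, partials=6):
    return sum(math.sin(2 * math.pi * f0 * n * t) / n
               for n in range(1, partials + 1))

# the overtone frequencies of a 110 Hz fundamental
print([110.0 * n for n in range(1, 7)])  # [110.0, 220.0, ..., 660.0]
```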
Sure, there have been plenty of attempts to distill music to a mathematical essence. Certainly the ancient Greeks tried this, and traditional counterpoint resembles math in a number of ways. But at the end of the day, mathematical descriptions of music, and music theory more generally, are more useful as descriptive tools: they give language to what people are doing musically and help us understand why we perceive some things as sounding better than others.
Starting with numbers can be good in some respects, like understanding the circle of fifths or how scales are built out of intervals, how chord progressions and harmony work and how to reharmonize, all of which can be augmented with a solid conceptual understanding. But at the end of the day, your ear and creative spirit are your primary asset when it comes to creating good music. This is why computer-generated music has been so bad up until AI took over. Great for building arpeggiators or backing tracks, but good luck creating a beautiful melody in a purely numerical rule-based system.
Logic only works in the context of definite ontologies. But audio frequencies are continuous, not discrete. It really is all vibes at the end of the day.
Plug for Angine de Poitrine for a contemporary example of music that breaks the rules that define traditional music.
Music today is utter crap at all levels, this is a verifiable scientific fact.
This is probably why.
Music "theory" was invented as a critical tool (i.e., basically to enable reviewers to describe and evaluate the music of the time), not as a composition tool.
Basically, we're holding it wrong and it's doing us harm.
No it's not, and it's not a verifiable fact. Unless you have a source? Rick Beato knocks the sameness of 'the charts', but there's more to music than that... take a look at who he interviews.
Beethoven improvised his pieces on the fly and performed them himself. This wasn't considered as something out of the ordinary at the time.
Can you imagine the average conservatory graduate improvising anything today? Even a pentatonic blues riff?
Clearly we went off the rails bigtime somewhere along the way. The framework we're using to teach and compose music is actively hindering us.
But just a bit before that, the foreword, written in the present day, bars AI scrapers from reading or referencing the materials under any circumstances!
Anyway, this seems fantastic and I'll definitely be spending some time diving in.
My first thought seeing this post was, I need to find more literature like this, fine-tune a model with that + Logic Pro documentation, then give it an MCP to control Logic Pro and see if it can be my music production assistant.
Btw, I have a feeling that if you want to learn about computer music, you can send the PDF to an LLM and ask what a chapter is about and how to represent it using Csound or SuperCollider.
My experience is that with computer music, you have to keep experimenting and listening in order to truly understand and innovate.
[1] Mark Newman's new book: The Science of Music (2023):
https://lsa.umich.edu/cscs/news-events/all-news/search-news/...
It says it was originally published by Wiley in 2009, and the rights reverted to the author in 2025, whereupon the author released it on the net for free.
These aren't resources for getting started. They're more like encyclopedias for learning about DSP and tech once you've established the fundamentals of music and sequencing.
If a beginner wants practical knowledge for making records with electronic instruments I'd give them a DAW, teach them to record and sequence, teach them basic music theory, and then point them to something like Ableton's synthesis tutorials that will teach them about oscillators, envelopes, filters, LFOs, and basic sample manipulation.
That's 80% of the necessary skills right there.
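The oscillator-plus-envelope basics mentioned above can be sketched in plain Python (a toy under assumed conventions; the function name, 44.1 kHz rate, and linear attack/release shape are illustrative choices, not from any tutorial):

```python
import math

# A sine oscillator shaped by a simple linear attack/sustain/release envelope.
SR = 44100  # sample rate in Hz

def render(freq=220.0, dur=0.5, attack=0.05, release=0.2):
    n = int(SR * dur)
    out = []
    for i in range(n):
        t = i / SR
        osc = math.sin(2 * math.pi * freq * t)       # the oscillator
        if t < attack:
            env = t / attack                          # attack ramp up
        elif t > dur - release:
            env = max(0.0, (dur - t) / release)       # release ramp down
        else:
            env = 1.0                                 # sustain
        out.append(osc * env)
    return out

samples = render()
print(len(samples))  # 22050 samples for half a second at 44.1 kHz
```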
> An algorave (from an algorithm and rave) is an event where people dance to music generated from algorithms, often using live coding techniques. Alex McLean of Slub and Nick Collins coined the word "algorave" in 2011, and the first event under such a name was organised in London, England. It has since become a movement, with algoraves taking place around the world.
I work professionally with music, including using Ableton. I do create but don't sell/advertise; I'm strictly 'backstage'. I love everything about creating music, less so reading about music (reviews, critiques, dissecting), though there are occasional exceptions. Are you putting your creativity online publicly?
I like to mix electronic and orchestral sounds. Some examples:
https://soundcloud.com/emmets-music/the-seven-hills-of-rome
https://soundcloud.com/emmets-music/dark-matter
I also like piano a lot (I only know how to play guitar badly, I just draw in the notes and work velocity until it sounds good to me):
https://soundcloud.com/emmets-music/life-is-delicate-remixed
https://soundcloud.com/emmets-music/soledad
Do you put yours online? I enjoy listening to other musicians. People give soundcloud a lot of grief but I love the service and the musicians I have met there.
https://archive.org/details/latelegrafiarapida.eltritecladoy...
Math rock, and microtonality.
Introduction to Computer Music, by Prof. Jeffrey Hass