What are all the musical parameters or elements?
from sopularity_fax@sopuli.xyz to nostupidquestions@lemmy.ca on 26 Nov 15:45
https://sopuli.xyz/post/37286236

Looking at it from a joint DAW/editing perspective + music theory perspective

#nostupidquestions


sopularity_fax@sopuli.xyz on 26 Nov 15:46

Gain/Amplitude(?)

= Volume

Assassassin@lemmy.dbzer0.com on 26 Nov 15:55

Amplitude, timbre/tone, frequency, sample rate, harmonics. Not entirely sure what you’re asking, but these terms are all related.

sopularity_fax@sopuli.xyz on 26 Nov 15:56

Is frequency distinct from pitch? I haven't wrapped my head around this potential distinction.

Assassassin@lemmy.dbzer0.com on 26 Nov 16:05

So the only reason I separate those two is that pitch is inherently tonal and defined by a single fundamental frequency, whereas frequency itself is an entire domain that doesn't require tonality; unless you're dealing with a pure sine wave, a sound spans many frequencies at once.

For me, pitch is about assigning a sound a role in melody and harmony. Inspecting and modifying the constituent frequencies of the sound is more about engineering and technical work than creative artistic work.

If I’m composing, I’m focused primarily on pitch. If I’m mixing and sound designing, I’m working with the whole frequency spectrum.

sopularity_fax@sopuli.xyz on 26 Nov 17:01

Example:

  • Frequency = 440 Hz
  • Pitch = A4

Right?

Also, can you talk more about sampling rate?
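That mapping generalizes: in equal temperament with A4 = 440 Hz, each semitone multiplies frequency by 2^(1/12). A minimal sketch of the conversion, using the standard MIDI note-number convention (A4 is note 69, C4 is note 60):

```python
import math

A4_FREQ = 440.0  # reference tuning
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def freq_to_pitch(freq_hz):
    """Map a frequency to the nearest equal-temperament pitch name."""
    # MIDI note number: A4 (440 Hz) is note 69, 12 semitones per octave
    midi = round(69 + 12 * math.log2(freq_hz / A4_FREQ))
    name = NOTE_NAMES[midi % 12]
    octave = midi // 12 - 1  # MIDI convention: note 60 = C4
    return f"{name}{octave}"

def pitch_to_freq(midi_note):
    """Inverse direction: MIDI note number to frequency in Hz."""
    return A4_FREQ * 2 ** ((midi_note - 69) / 12)

print(freq_to_pitch(440.0))   # A4
print(freq_to_pitch(261.63))  # C4 (middle C)
```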

Assassassin@lemmy.dbzer0.com on 26 Nov 17:08

Yes, but sounds are rarely a single frequency. Their timbre and identity are tied to the amplitudes of frequencies above and below the fundamental.

For example, if you played A 440 with:

  • A sine wave: you only get a peak at 440 Hz. Pure fundamental.
  • A saw wave: you get 440, then 880, then 1320, etc. (every harmonic)
  • A square wave: 440, 1320, 2200, etc. (every odd harmonic)
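A small sketch of those ideal harmonic series (frequencies only; in real saw and square waves the amplitudes of the partials also fall off as you go up the series):

```python
def harmonics(fundamental_hz, wave, n=5):
    """First n partial frequencies (Hz) for idealized waveforms."""
    if wave == "sine":
        return [fundamental_hz]  # pure fundamental only
    if wave == "saw":
        # every integer harmonic: 1f, 2f, 3f, ...
        return [fundamental_hz * k for k in range(1, n + 1)]
    if wave == "square":
        # odd harmonics only: 1f, 3f, 5f, ...
        return [fundamental_hz * k for k in range(1, 2 * n, 2)]
    raise ValueError(f"unknown wave: {wave}")

print(harmonics(440, "saw", 3))     # [440, 880, 1320]
print(harmonics(440, "square", 3))  # [440, 1320, 2200]
```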

sopularity_fax@sopuli.xyz on 26 Nov 17:10

Here's a question: doesn't every sound contain all the frequencies below it, since it is at least equal to or greater than them? Like if I have $10.00, I also have every amount up to and including $10.

Reminds me of that math thing with the !factorial

How does composition or sound effects exploit that truth if so?

Assassassin@lemmy.dbzer0.com on 26 Nov 17:38

No, sound is a collection of discrete frequencies. If every sound contained every frequency lower than its fundamental, sounds wouldn't really work. If you want to hear what that would specifically sound like, play a clip of white noise, then sweep a low-pass filter across the spectrum. The low-pass filter's cutoff would be equivalent to your fundamental frequency, with the white noise giving you sound on every frequency below it.

Sound isn't an additive sum of every frequency up to a point; it's better to think of it as a collection of sine waves being played at the same time, each with a different amplitude relative to a fundamental frequency. In fact, that is the entire basis of additive synthesis.
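That "collection of sine waves at different amplitudes" idea can be sketched directly. This is a toy additive-synthesis example, not any particular synth's implementation; the partial count and amplitudes are just illustrative:

```python
import math

def additive_sample(t, partials):
    """One output sample at time t: sum of sine partials.
    partials is a list of (frequency_hz, amplitude) pairs."""
    return sum(a * math.sin(2 * math.pi * f * t) for f, a in partials)

# crude sawtooth approximation: harmonic k at amplitude 1/k
saw_partials = [(440 * k, 1 / k) for k in range(1, 8)]

sample_rate = 44100
samples = [additive_sample(n / sample_rate, saw_partials) for n in range(64)]
print(samples[0])  # 0.0 — every sine starts at zero phase
```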

sopularity_fax@sopuli.xyz on 26 Nov 17:42

Ya, I think factorial was a bad analogy. I didn't necessarily mean additive, more a vague impression that everything should have some internal euphony/harmony with whatever is below it frequency-wise, but that sounds insane now that I say it aloud haha

Assassassin@lemmy.dbzer0.com on 26 Nov 17:53

All good. There’s such a crazy amount of science and math that goes into sound. It’s hard to really nail down a good metaphor for it sometimes. It took many years of music production to learn all of this stuff, and I still don’t understand how everything works.

sopularity_fax@sopuli.xyz on 26 Nov 17:58

I honestly understand why so many mathematicians are also musicians and vice versa. Also, since I got into dabbling with coding, math and music both make more sense to me on those grounds.

Assassassin@lemmy.dbzer0.com on 26 Nov 18:00

Yuppppp. Ignoring lyrics, music is really just applied math.

Assassassin@lemmy.dbzer0.com on 26 Nov 17:25

So sampling rate is not always important, but when it is, it's very important. It's effectively the audio version of frame rate for video: it's how often the amplitude of a signal is measured. This matters because half of your sample rate determines the highest frequency that can be reproduced (the Nyquist theorem). That's because each cycle of a sound wave has both a positive and a negative half, so you need at least two samples per cycle to capture it.

Most audio that you run into on a regular basis is 44.1 kHz, with stuff designed for film/TV being 48 kHz. This means that your upper limits for frequency reproduction are 22.05 kHz and 24 kHz, which is why EQs typically cut off at those specific points. Any audio with a frequency above those points can't be represented accurately. Since human hearing decreases in sensitivity rapidly as you approach 20 kHz, this really isn't a huge issue, since that information would not be audible anyway.
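The arithmetic above in a couple of lines:

```python
def nyquist_hz(sample_rate_hz):
    """Highest frequency a given sample rate can represent: half the rate."""
    return sample_rate_hz / 2

def representable(freq_hz, sample_rate_hz):
    """Can this frequency be captured without aliasing at this rate?"""
    return freq_hz <= nyquist_hz(sample_rate_hz)

print(nyquist_hz(44100))            # 22050.0 — the familiar EQ ceiling
print(nyquist_hz(48000))            # 24000.0 — film/TV audio ceiling
print(representable(23000, 44100))  # False — above Nyquist, would alias
```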

The fun starts when you drop your sample rate below 44.1 kHz. This lowers the upper boundary of what frequencies can be reproduced, which in turn produces loss of information and a very recognizable form of distortion that is reminiscent of old-school video games. I really don't have a solid enough understanding of the exact mechanics behind this distortion to speak confidently, but pretty much every DAW under the sun has a plugin for it, so give it a try. You'll recognize the sound immediately.
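A rough sketch of what those plugins do under the hood: naive decimation, i.e. keeping every Nth sample with no anti-aliasing filter, so content above the new Nyquist limit folds back down as that recognizable distortion. The tone and rates here are just illustrative:

```python
import math

def decimate_naive(samples, factor):
    """Bitcrusher-style downsampling: keep every Nth sample, no filtering.
    Frequencies above the new Nyquist limit fold back as audible aliasing."""
    return samples[::factor]

sample_rate = 44100
# a 10 kHz test tone, 10 ms long
tone = [math.sin(2 * math.pi * 10000 * n / sample_rate)
        for n in range(sample_rate // 100)]

# effective rate 11025 Hz, new Nyquist 5512.5 Hz — the 10 kHz tone now aliases
lofi = decimate_naive(tone, 4)
print(len(tone), len(lofi))  # 441 111
```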

sopularity_fax@sopuli.xyz on 26 Nov 17:39

Do you know anything about interpolation? Off-topic, but your comment likening it to frame rate reminded me of it.

Assassassin@lemmy.dbzer0.com on 26 Nov 17:52

Uhh, a little bit, but not much. From my understanding, it's essentially the opposite process to downsampling. The goal is to take something at a lower sample rate and increase the sample rate while attempting to restore lost harmonic information. Essentially, it's a very highly engineered system for guessing what harmonic information is missing and recreating it.

This can be done because low-rate sampling of higher frequencies removes the higher frequencies, but doesn't remove their effects on lower frequencies. Due to a phenomenon called aliasing (sometimes described as Nyquist reflection or foldback), harmonics above the Nyquist limit are "reflected" back down the spectrum. Please don't ask why it works that way, I for sure can't explain it well. This trace evidence is a large part of where the downsampling distortion is created, and means that a computer can likely figure out what frequencies are missing from a given piece of audio by looking at the harmonic info present: if a sound has a fundamental, 3rd, 5th, and 7th harmonic below the Nyquist frequency, it probably had a 9th, 11th, and 13th harmonic above it. Again, I'm not incredibly well versed in it, so this is kind of my own interpolation of how it works.
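The "reflection" can actually be computed directly. A first-order sketch of where a frequency above the Nyquist limit lands after sampling (the 8 kHz rate and 700 Hz tone here are arbitrary examples, picked so one harmonic crosses the limit):

```python
def folded_freq(f_hz, sample_rate_hz):
    """Where a frequency shows up after sampling (first-order fold only).
    Frequencies above Nyquist are reflected back below it."""
    nyquist = sample_rate_hz / 2
    f = f_hz % sample_rate_hz
    return sample_rate_hz - f if f > nyquist else f

sr = 8000  # Nyquist limit = 4000 Hz
# odd harmonics of a 700 Hz square-ish tone:
for k in (1, 3, 5, 7):
    print(k, folded_freq(700 * k, sr))
# the 7th harmonic (4900 Hz) is above Nyquist and folds down to 3100 Hz
```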