There is a lot of confusion around the topic of loudness normalisation in the audio community.
Even some reputable music producers have had the wrong idea when posting tips to their Facebook page.
Loudness normalisation is my favourite topic of the moment, simply because it is the reason the loudness wars are over.
Because no matter how much you squash your song, it's only going to end up just as loud as everything else.
No. When a platform analyses your audio, it normalises it to its target loudness using a simple gain change. You can actually test this at home. All you need is iTunes and some songs in your library. Open iTunes' preferences, turn Sound Check on and listen to a bunch of different music. For any of those songs, right-click it and choose Get Info, click the File tab, and look at the 'volume' reading. That number (in decibels) is the amount of gain iTunes applied to bring that song to its target level. Examples below.
As you can see from the above example, with Sound Check on, iTunes actually turned this one down by 6 dB to reach the target level. In theory, that means the song gave up 6 dB of dynamic range in the mastering process, yet on playback it is now only as loud as everything else.
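The gain move described above is simple arithmetic: the platform measures a track's integrated loudness (per ITU-R BS.1770, in LUFS) and applies the difference between that and its target as a flat gain, with no compression or limiting. Here is a minimal sketch of that idea; the function names and the -14 LUFS target are illustrative assumptions, not any platform's actual code.

```python
# Sketch of loudness normalisation as a plain gain move.
# Assumes the integrated loudness (in LUFS) has already been measured;
# real platforms measure it per ITU-R BS.1770.

def normalisation_gain_db(measured_lufs: float, target_lufs: float) -> float:
    """Gain in dB the platform would apply to hit its target loudness."""
    return target_lufs - measured_lufs

def apply_gain(samples, gain_db):
    """Apply a flat gain change to linear sample values (no limiting)."""
    factor = 10 ** (gain_db / 20)
    return [s * factor for s in samples]

# A track mastered to -8 LUFS, played back at a -14 LUFS target:
gain = normalisation_gain_db(measured_lufs=-8.0, target_lufs=-14.0)
print(gain)  # -6.0 -> turned DOWN by 6 dB, like the iTunes example above
```

Note the louder you master, the more negative this gain becomes; the squashed master is simply turned down, which is why squashing buys you nothing on a normalised platform.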
Absolutely not! I love Deadmau5 as much as anyone else. The only point I am making here is that if he mastered this song now, he wouldn't necessarily need to apply so much compression for the sake of loudness. However, loudness is a sound in itself. His reasoning could have been as much an artistic choice as a practical loudness one. The point is, if you are running your song into a limiter, you could be losing dynamic range unnecessarily if the only goal is to be loud. If you are compressing it because you like the sound of that, then by all means!
Neither do I. However, the streaming platforms are adopting loudness normalisation. Sound Check is just iTunes' loudness normalisation process, and an example you can try easily at home.
Depends on your mindset. If the only reason you did that hard work in the first place was the now-old narrative that your song has to 'compete' with everyone else's, then it makes sense that you might think so. The mindset now should be that you don't have to worry so much about making it loud, because loudness normalisation will take care of that anyway (to an extent). That means you can make more dynamic masters without the fear of them sounding too quiet.
Incorrect. You actually need mastering now more than ever. The role of the mastering engineer will be what it always was before all of this: making masters that translate. Today's mastering engineer keeps a finger on the pulse of this stuff, which is still quite new and not necessarily set in stone.
As promised in the title of this blog, the below infographic is the best way to familiarise yourself with the current situation. You could also do a lot worse than to keep an eye on Ian Shepherd's blog to stay up to date. Or you can just hire a mastering engineer if you don't want to think about all of this. I wouldn't blame you at all! If you are mastering at home, this tutorial should help arm you with the know-how of what to be aiming for and how to achieve it.
UPDATE 19/05/17: It seems Spotify have lowered their target level to around -14 LUFS integrated, which I tested using a Metallica song. Others have confirmed this too.