Producing Bass Tracks Remotely: A Project Analysis By Minutes to Midnight
What This Post Is About
Working remotely has become an accessible and relatively easy way for musicians to collaborate. In this article, I describe my process for producing effective bass tracks for clients based in a different city or country.
The post is based on my work for Antiquity and their recent single The Far Side Of The Sun. I’ve been collaborating with Gerald Duchene on his amazing music for more than a year now, which makes this a perfect example.
Stage 1: Assessment
Regardless of whether I have an established working relationship with the client, I start with a proper assessment of the material. I listen to the song and have a chat with them. It’s useful to ask about the kind of mood they’re after, or whether they have creative ideas about the bass line.
Ultimately, this preliminary work is all about finding the best way to deliver what the client wants. When I first listen to the audio, I tend to form an idea in my mind. It’s not a defined bass part yet, but a collection of sketches that often end up in the final bass track.
The Source Material
The more technical aspect of the assessment is obtaining the source material. Sometimes clients send just a rough mix of the track in a single audio file. In this case, I need some information:
- The BPM of the song.
- The SMPTE and fps settings in their DAW project, so I can import the rough mix into my session without the audio drifting because of mismatched synchronisation.
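To illustrate why an exact BPM matters, here is a minimal sketch (plain Python, not tied to any DAW; the tempo and sample-rate values are illustrative) of how bar/beat positions map to absolute sample positions, and how quickly a wrong tempo makes the grid drift:

```python
# Sketch: map a beat position to an absolute sample position. A tempo
# that is off by even 1 BPM shifts later bars by a clearly audible amount.

def beat_to_samples(beat: float, bpm: float, sample_rate: int = 48000) -> int:
    """Return the absolute sample position of a given beat (0-based)."""
    seconds_per_beat = 60.0 / bpm
    return round(beat * seconds_per_beat * sample_rate)

# At 120 BPM, beat 16 (the start of bar 5 in 4/4) lands exactly at 8 s:
pos_correct = beat_to_samples(16, 120.0)   # 384000 samples
# Import the same audio assuming 119 BPM and the grid is already off:
pos_wrong = beat_to_samples(16, 119.0)
drift_ms = (pos_wrong - pos_correct) / 48000 * 1000   # ~67 ms after 4 bars
```

Four bars in, the mismatch is already well past the point where the rough mix and the grid feel out of sync.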
Depending on the client’s level of expertise, I might need to point out a few best practices. I email them a how-to in PDF format, highlighting the fastest and best way to get their project to me.
The Session
Since I’ve been working with Gerald for a long time, we’ve developed a well-oiled routine:
- He sends a Logic session complete with song markers. This is optional, but nevertheless a great visual help for a quick scan of the song structure.
- All the audio is present, in separate tracks.
- He takes care of rendering virtual instruments or MIDI tracks to audio.
- The same rendering applies to audio tracks containing plug-ins, for the sake of an easy back-and-forth exchange.
I don’t actually require all the separate tracks. If the client has a test track for the bass, a rough mix with and without it is sufficient. However, being able to take the volume of groups of instruments up or down is a great help. Typically, I might want to increase the level of drums and percussion to help with my recording.
When I receive a session, I can decide to keep it in its original format or move to different software. Although my main DAW is Pro Tools, I sometimes choose to stay in Logic for the sake of speed.

Stage 2: Recording
I don’t have a standard for how many takes I record. My rule is: as many takes as I deem fit. Even if I feel a take is good from start to end, I keep going for a couple more: I might come up with different or more interesting riffs or variations.
Whenever I’m sure about a good take, I write down its number, so I can refer to it as the default later, in the comping phase. Whether I record 2 takes or 10, my approach is the same: if I have enough time to work on a bass track, I’ll record as many as I can.
I occasionally take breaks. Sometimes I even stop working on the track altogether and come back to it the day after. The advantage of this approach is fresh ears and a more open perspective.
I record the bass through a SansAmp BassDriver DI v2, rarely applying a software pre-amp or amp simulators.
Stage 3: Comping
I don’t listen to all my recordings because I know which ones are the best — usually the last three or four. I’m an exception to the rule that the first take is always the best. I tend to use the first takes as rehearsals in my quest to find the perfect track.
I rarely do punch-ins. If I make an unrecoverable mistake, unless there’s something I really want to keep, I stop and delete. If an otherwise perfect take has a wrong passage, there’s going to be some other take that covers for it.
I put the most interesting takes on top, so that I start with a good default. In Pro Tools I use playlists, in Logic I adopt its comp edit functionality. They’re both intuitive and easy to use. I prefer Pro Tools because I can select a section and quickly swap between takes in loop mode using a keyboard shortcut. It’s personal taste; Logic’s approach is as good as Pro Tools’.
The comp is done on the whole track, moving forward from start to end. I check each section of the song, following the markers.
I always remember where I played interesting riffs, so I tend to pay particular attention to those. If a nice riff wasn’t executed properly, I record a punch-in to fill in. I do this as a second-to-last step in the comping phase.
The final step is listening to the track in context with the mix. If I’m happy with the result, I batch fade the segments and commit the entire comp to audio. I hide and deactivate the source, in case I want to go back and tweak something.
Stage 4: Editing
This is the stage that I invariably perform in Pro Tools. I prefer working with Avid’s elastic audio rather than Logic’s flex time functionality. After enabling the feature (monophonic, real-time processing), I go in and adjust the transient sensitivity. I want to get to a point where only the correct hits are detected. It’s the best way to avoid horrific artifacts that might come up during editing. This is especially useful when I’ve recorded the bass using a fretless.

After switching the track view to “Analysis”, I remove excessive transients that might have slipped through the previous step. Once done, I switch to “Warp” view and check if there are obvious mistakes in the timing. I’m in grid mode, usually set to 16th notes, so it’s pretty easy to see which notes are off. I tend to adjust single notes, when they’re obviously out of place, rather than quantize. In case I decide to go for an automated process, I never go beyond an 82% clean-up. I prefer to retain the human factor and my playing. In the video below, you can see a single automatic audio quantize applied to a specific small selection played as triplets, and a couple of notes adjusted.
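The idea behind a partial (strength-based) quantize can be sketched in a few lines of Python. This is a hypothetical illustration of the concept, not Pro Tools’ actual algorithm: each transient only moves part of the way toward the nearest grid line, so some human timing survives.

```python
# Sketch of strength-based quantize: positions (in seconds) move toward
# the nearest multiple of `grid` by a fraction `strength` (0.0 to 1.0).

def partial_quantize(positions, grid, strength=0.82):
    """Move each position `strength` of the way toward the nearest grid line."""
    out = []
    for p in positions:
        nearest = round(p / grid) * grid
        out.append(p + strength * (nearest - p))
    return out

# 16th-note grid of 0.125 s at 120 BPM; the middle note is 20 ms late:
notes = [0.0, 0.145, 0.250]
tight = partial_quantize(notes, grid=0.125, strength=0.82)
# The late note ends up at ~0.1286 s: 82% of the way back to 0.125 s,
# while notes already on the grid don't move at all.
```

At 100% strength every note would snap exactly to the grid, which is precisely the mechanical feel the 82% ceiling avoids.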
I finally switch the elastic audio to X-Form, a rendering-only mode. It’s a higher-quality process than real-time, and the result is always stunning to my ears. Again, when it’s complete, I commit the audio to a new track, and hide/disable the source.
Stage 5: Mix
EQ
I start mixing by focusing my attention on the relationship between the kick drum and my bass. I work out the fundamental of the kick and try an EQ cut on my track around the same frequency; it gives space to the kick by getting the bass out of the way. I also high-pass my track: since I only use a 4-string instrument, I leave the deepest sub-bass free for whatever sound might be using it.
As a second step, I boost the fundamental of the bass, to which I also add some harmonic enhancement. It’s the most efficient way for the bass guitar to be heard on smaller speakers such as mobile phones, tablets and laptops. I mostly use Waves plug-ins, except for some interventions from Soundtoys.
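For the curious, the kick-frequency cut described above can be modelled with a standard peaking EQ biquad. This is a sketch using the well-known RBJ Audio EQ Cookbook formulas; the 60 Hz fundamental, −4 dB depth and Q are illustrative values, not a recipe.

```python
import math

def peaking_eq(f0, gain_db, q, sample_rate=48000):
    """Biquad coefficients (b0, b1, b2, a0, a1, a2) for a peaking EQ (RBJ cookbook)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / sample_rate
    alpha = math.sin(w0) / (2 * q)
    b0 = 1 + alpha * A
    b1 = -2 * math.cos(w0)
    b2 = 1 - alpha * A
    a0 = 1 + alpha / A
    a1 = -2 * math.cos(w0)
    a2 = 1 - alpha / A
    return b0, b1, b2, a0, a1, a2

def gain_at(coeffs, f, sample_rate=48000):
    """Magnitude response of the biquad at frequency f, in dB."""
    b0, b1, b2, a0, a1, a2 = coeffs
    z = complex(math.cos(2 * math.pi * f / sample_rate),
                math.sin(2 * math.pi * f / sample_rate))
    num = b0 + b1 / z + b2 / z ** 2
    den = a0 + a1 / z + a2 / z ** 2
    return 20 * math.log10(abs(num / den))

# A -4 dB cut on the bass track at a hypothetical 60 Hz kick fundamental:
cut = peaking_eq(60, -4.0, q=1.4)
```

The cut reaches its full −4 dB only at the centre frequency and leaves the rest of the spectrum essentially untouched, which is exactly what makes it transparent.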
Compression
The third step is compression. I’m not a fan of compressing the source channel, except when I have really extreme peaks or dips; in some cases, I apply Pro Tools’ clip gain directly to the audio. I tend to use compression as a parallel process, oftentimes adopting multiple parallel processes through several busses. I send the signal to one to three aux channels, apply different flavours of compression from gentle to more drastic, and blend them with the original through a bass master bus.
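The parallel idea itself is simple enough to show in a toy sketch. This is plain per-sample Python with a static gain curve (no attack, release or real plug-in modelling), assuming made-up threshold and ratio values, purely to show why blending beats compressing the source:

```python
# Toy parallel compression: compress a copy of the signal hard, then sum
# it with the untouched dry signal, as on an aux bus.

def hard_compress(x, threshold=0.3, ratio=8.0):
    """Static compression curve: above the threshold, excess level is divided by `ratio`."""
    out = []
    for s in x:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

def parallel_blend(dry, wet_gain=0.5, **kwargs):
    """Sum the dry signal with a compressed copy at `wet_gain`."""
    wet = hard_compress(dry, **kwargs)
    return [d + wet_gain * w for d, w in zip(dry, wet)]

dry = [0.05, 0.9, -0.8, 0.1]   # quiet notes plus two loud peaks
mix = parallel_blend(dry)
# Quiet samples gain more level relative to the peaks, so the dynamic
# range shrinks while the original transients survive in the dry path.
```

Because the dry path is untouched, the attack of each note stays intact; the wet bus only lifts the quiet material underneath.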
My plug-in staples are the 1176 (black), an LA-2A, and a third of my choice depending on the sound and the interpretation I want to convey. Sometimes an RComp, an H-comp or the LA-3A. At times I even add a Distressor emulation or Decapitator. In fact, saturation is a process I pretty much always apply, mildly and in parallel, unless the song really requires a distorted bass.
If I don’t want or need to use parallel processing, I apply compression directly; in this case I always commit to audio at each step. After I add an EQ, I commit; if I add harmonic enhancement, I do the same, and so on. At each step, I deactivate and hide the previous source track, to save on CPU and keep a backup.
After my compression is done, sometimes I go on and add a Pultec to the final track, on my master bus.
Send Effects
I almost never apply reverb or delay to the bass. I’ve used both on a song from my recent album After 1989, which features a fretless solo. I like to add spatial effects only when the bass is performing an important melodic function, sometimes for a particular secondary effect, or together with hard panning. The “synth bass” audible in the second half of the same song is my Warwick played with a pick, passed through a stutter effect, a phaser, a panner and a delay.
Stage 6: Delivery
When it comes to delivery, it all depends on what the client sent to me. If I received a simple stereo audio file, I reciprocate by sending a mono track with my bass only. The true peak is between –3 and –6 dB with an RMS of about –12 dB. No matter when my part starts, I always export the audio from the very beginning of the session.
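A quick way to sanity-check those delivery levels is to measure them in code. This is a small sketch in plain Python, assuming a mono signal as a list of float samples; note that sample peak is only an approximation of true peak, which strictly requires inter-sample oversampling:

```python
import math

def peak_dbfs(x):
    """Sample peak of a float signal, in dBFS."""
    return 20 * math.log10(max(abs(s) for s in x))

def rms_dbfs(x):
    """RMS level of a float signal, in dBFS."""
    return 20 * math.log10(math.sqrt(sum(s * s for s in x) / len(x)))

# Sanity check: a full-scale 440 Hz sine at 48 kHz peaks at 0 dBFS
# with an RMS of about -3 dBFS.
sine = [math.sin(2 * math.pi * 440 * n / 48000) for n in range(48000)]
```

Running the same two functions on a bounced bass stem shows at a glance whether it sits in the −3 to −6 dB peak and roughly −12 dB RMS window mentioned above.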
For Antiquity, since I received a Logic session, I bounce my final bass from Pro Tools and import it into the original Logic song. I give it a listen and, if it sounds good, I save the session and copy it to Dropbox. In the following video example, I found the track to be too hot (but not clipping), so I reduced the gain directly in Logic.
Optimising is paramount. To deliver a clean session, I export a copy and clean up any unused or deactivated tracks. I never deliver a session with my takes, comps or edits: only the mixdown.
Stage 7: Feedback From The Client
Unless I’m working on a larger project, for which I use my Trello system, I have a practical way for the client to give feedback on my mixes.
When I send a direct link to the bounced audio in my Dropbox, clients can listen and add comments that are attached to the correct position in the song timeline. It’s the same behaviour as SoundCloud’s comment system.

The Final Result
Listen and Buy on Bandcamp