Monday, May 14, 2012

Dubstep bass sound design resources

There are so many brilliant synths, but it's still hard to get the right sound? Yup. Why? Sound design is an art: it takes skill and patience.

The following is a collection of resources that I find useful when thinking about designing timbres and sounds:

Rob Papen - Blade: This YouTube video gives insight into using 96 sine partials to create a range of timbres. Rob explains pretty much exactly what each control of the generative section (called the Harmolator, or some rubbish like that) does.

Risky Dub - Cinema Bassline (Skrillex): A 1-minute tutorial on using Massive to get the "Cinema" bassline timbre. Pretty awesome; the guy even finds time to talk.

Risky Dub - Timewarp lead (Sub Focus): A 2-minute tutorial on getting a pretty fat, pitch-crazy lead.

Risky Dub - Generic Electro Dub sounds: An 8-minute tutorial on basslines using Massive. Multi-purpose application of the sounds is possible; they're just heavy, fat lead / bass timbres :)

Risky Dub - Doctor P / Flux Pavilion bass: Tutorial on getting that "vocal" bassline going, using Massive.

Nizmo Tutorials - Bass Cannon (Flux Pavilion): Gives 7 minutes of heavy bassline madness, no explanation though... just gotta follow the screen clicks.

Byzanite TV - Formant filtering in Massive: The name says it all; make Skrillex, Borgore, and Flux your bitch.


That's all for now, but I'll probably post a couple more links as I find them!


Sunday, May 13, 2012

samplv1: A new sampler synth

There's a new synth on the block: it's called samplv1.
Check out the author's announcement: https://www.rncbc.org/drupal/node/488

It's pretty epic, with various little dials to tweak that give great textured sound.
In a flash I threw together this (pretty complex) sound using just one input sample:

Link to a synth sound thrown together in samplv1
I don't have a MIDI keyboard with me at the moment, but I intend to play with samplv1 a lot more once I get some input device :)

Thursday, May 10, 2012

FAUST: Slightly more advanced sawtooth gen with ADSR

Hey All,

Today I figured out how to use the FAUST libraries ("import" statements). In the source code for FAUST, there's a folder called "architecture", containing files called "music.lib", "oscillator.lib", "effect.lib" etc...

These files can be imported, and then you can use the functionality they provide! The following code shows how to create a sawtooth oscillator and then apply an ADSR to the signal. Pretty simple, but very educational :)

Code:

// import the needed libraries
import("music.lib");
import("oscillator.lib");

// generate a sawtooth wave, using "saw1" from "oscillator.lib"
sawGen = saw1( vslider("Freq", 60, 0, 127, 0.1) );

// get variables for A D S and R stages, using sliders
attack     = 1.0/(SR*nentry("[1:]attack [unit:ms][style:knob]", 20, 1, 1000, 1)/1000);
decay      = nentry("[2:]decay[style:knob]", 2, 1, 100, 0.1)/100000;
sustain    = nentry("[3:]sustain [unit:pc][style:knob]", 10, 1, 100, 0.1)/100;
release    = nentry("[4:]release[style:knob]", 10, 1, 100, 0.1)/100000;

// set the process function, using the button to trigger the ADSR, and then volume scale the sawtooth
process =  button("play"): hgroup("", adsr(attack, decay, sustain, release) : *(sawGen));


GUI using JACK & GTK:


The comments there should tell you what each bit does, and the compilation is the same as in the last tutorial... Enjoy the sawtooth madness!
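To make the knob scaling concrete, here's a rough plain-Python sketch of a linear ADSR envelope driven the same way as the FAUST code: millisecond times become per-sample ramps at a fixed sample rate. The function name and shape are made up for illustration; this is not FAUST's actual adsr implementation.

```python
# Plain-Python sketch of a linear ADSR envelope (illustration only,
# not FAUST's adsr): ms times are converted to per-sample ramps.

SR = 44100  # sample rate, like FAUST's SR constant

def adsr_env(attack_ms, decay_ms, sustain_pc, release_ms, gate_samples, total_samples):
    """Return one gain value per sample; gate_samples is how long 'play' is held."""
    a = int(SR * attack_ms / 1000.0)   # attack length in samples
    d = int(SR * decay_ms / 1000.0)    # decay length in samples
    r = int(SR * release_ms / 1000.0)  # release length in samples
    s = sustain_pc / 100.0             # sustain level, 0..1
    env = []
    level = 0.0
    for n in range(total_samples):
        if n < gate_samples:               # gate held down
            if n < a:                      # attack: ramp 0 -> 1
                level = n / float(a)
            elif n < a + d:                # decay: ramp 1 -> sustain
                level = 1.0 + (s - 1.0) * (n - a) / float(d)
            else:                          # sustain: hold
                level = s
        else:                              # release: ramp down to 0
            level = max(0.0, level - s / float(r)) if r else 0.0
        env.append(level)
    return env
```

Multiplying this envelope sample-by-sample with the sawtooth is exactly what the `adsr(...) : *(sawGen)` line does in the FAUST code.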

Saturday, May 5, 2012

Creating DSP units using Faust

Today we tackle writing DSP code, and then compiling it into a format that is usable for your platform & workflow. That means writing the DSP code once, and compiling it for each platform as necessary: a big time saver. To do this we have FAUST (Functional AUdio STream), a high-level, domain-specific programming language for writing audio processing code :)

Best bit: there's an online compiler for you to play with before any commitment! http://faust.grame.fr/index.php/online-examples

So you want to compile plugins locally? Good! Grab the faust package from your distro; it will install the compiler & some extra bits we need.

Now go to your faust programming folder, and let's get started:
touch bitcrush.dsp
Grab one of the simple examples from the online examples page, and copy the Faust code into your bitcrush.dsp file.


First we need to compile the Faust code into valid C++: that means we need access to all the Faust-specific bits, like architecture files. An architecture file is a "template" for building a program: JACK + console is one example, JACK & GTK is another, and JACK & QT, LADSPA, LV2 etc. are all architectures too.

Since I'm on Linux with GTK, I'm using the jack-gtk.cpp architecture file like so:
faust -a jack-gtk.cpp bitcrush.dsp -o bitcrush.cpp

Now we run the generated .cpp file through g++, a standard C++ compiler, and provide it with the necessary info to link the executable. Simple huh??
g++ -I/usr/lib/faust/ -lpthread `pkg-config --cflags --libs jack gtk+-2.0` bitcrush.cpp  -o bitcrush

Note: I've added an include path there so that the compiler knows where to look for the architecture files etc; it's needed! If you don't, you'll get an error like so:
[munch]   fatal error: gui/FUI.h: No such file or directory

Once that's fixed by adding the relevant include paths, it should work :D
Enjoy! -Harry

Saturday, April 28, 2012

AV composition: Using only 3 audio & 3 images

Hey all,

Today we cover how to create an electroacoustic composition using minimal input sounds and images, transforming them to form a new piece. The software we're gonna use is Audacity and Ardour to compose the soundtrack, then Blender for the video, and OpenShot to finish the video editing off ;)

Audio workflow is as follows:
-Open the samples you've got in Audacity
-Modify them (check out the "change pitch / tempo / speed" options)
-Repeat bits (select audio, effects->repeat) to get noises @ a certain pitch
-Export each resulting soundfile that you think is useful

-Open Ardour, create a new session
-Import the created files to the region list
-Start arranging sounds in time
-Add a reverb bus, some sends to it
-Don't be afraid to have lots of little regions

Screeny of Ardour session:

Next export to wav, then load up blender:
-Open a "Video Sequence Editor" pane, and hit "add->Sound"
-"Alt-A" should now play back the audio you've created
-Use markers to identify critical points in the soundtrack
-Open a "Node editor" view, and add lots of fun effects
-Using the "IPO curve editor" and keyframes, animate the effects
-Hit render once in a while to ensure you're getting what you expect

Screeny of Blender session:

Then choose your (video) render / export settings, and let Blender do the grunt work. Now to finish your video, and export it to a usable quality and size, load up OpenShot:
-Import your audio & video
-Align them in the timeline
-Add fades / titles / whatever
-Hit export, and choose your target format (web, DVD, youtube, phone)

That should be it :)

Wednesday, March 7, 2012

CSound: Global variables create helper instruments

A quick tutorial on how to make "helper" instruments in Csound. Basically, rather than running multiple "reverb" opcodes in various instruments, you create a "send" (like in your DAW software) and put a reverb on the send. Later you mix whatever amount of that send you want into the "master" output.


The screenshot shows some code in QuteCsound; the comments should explain the rest of the theory behind the send.
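The send idea itself is language-agnostic. Here's a tiny Python sketch of it (illustration only, not Csound; the function names and send levels are made up): each instrument mixes some of its output into a shared send buffer, one helper applies the expensive effect once, and the master sums dry + wet.

```python
# Plain-Python sketch of a "send" bus (not Csound): two dry signals
# each contribute a chosen amount to one shared send, the effect runs
# once on the send, and the master mixes dry + wet. The "reverb" is
# faked as simple attenuation just to keep the example self-contained.

def fake_reverb(buf):
    # stand-in for a real reverb opcode / plugin
    return [0.3 * x for x in buf]

def mix(dry_a, dry_b, send_a=0.5, send_b=0.2):
    n = len(dry_a)
    send = [0.0] * n
    for i in range(n):
        # like adding into a global send variable in each instrument
        send[i] = send_a * dry_a[i] + send_b * dry_b[i]
    wet = fake_reverb(send)          # one reverb instance, not one per instrument
    return [dry_a[i] + dry_b[i] + wet[i] for i in range(n)]
```

The payoff is the same as in Csound: one reverb running, however many instruments feed it.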


Monday, February 20, 2012

Cross synthesis & time stretching in CSound

The goals:
1. Cross-synthesise the two sounds using Csound’s pvcross opcode.
2. Time-stretch and pitch-shift one of the sounds using Csound’s pvoc opcode

To use any of Csound's phase vocoding opcodes, we need to first analyse the files we want to use: it's not hard, but you need to know to do it. Open the "Utilities" tab:

Then select your analysis type: "PVANAL" is the one we want. After that, set your input & output filenames, sample rate, number of channels, and a frame size of perhaps 1024, or thereabouts.
Now hit "Run PVANAL" and it should print out the following in the console.
PV analysis output:
util pvanal:
audio sr = 44100, monaural
opening WAV infile adrien.wav
analysing 119879 sample frames (2.7 secs)
1024 infrsize, 256 infrInc
466 output frames estimated
pvanal: creating pvocex file
20
40
[snip]
460
480

484 1-chan blocks written to adrien.pvx
Ok so we've analysed the files needed, now to use them in the .csd file:
Helpful docs online: http://www.csounds.com/manual/html/pvcross.html


Some code like that beside here generated some interesting results, but I don't know what the end goal should sound like...

Probably not really like this :(
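For intuition, here's a toy Python sketch of the idea behind frequency-domain cross synthesis: per frame, take the magnitudes of one sound and the phases of the other. This is only an illustration of the concept, not pvcross's actual algorithm (pvcross works on streamed phase-vocoder analyses), and the naive DFT is only sensible for tiny demo frames.

```python
import cmath

# Toy cross synthesis: magnitudes of frame A + phases of frame B.
# Naive O(N^2) DFT, purely for illustration on tiny frames.

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [(sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N).real
            for n in range(N)]

def cross_frame(frame_a, frame_b):
    A, B = dft(frame_a), dft(frame_b)
    # combine: magnitude from A, phase from B
    X = [abs(a) * cmath.exp(1j * cmath.phase(b)) for a, b in zip(A, B)]
    return idft(X)
```

Run over successive windowed frames of two real sounds, this gives the classic "spectral envelope of one, character of the other" effect.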







Time stretching is a bit easier:

Not sayin' it's fantastic, but it works :)
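As a rough picture of what time stretching does, here's a naive granular overlap-add sketch in Python: read overlapping windowed grains from the input at one rate, write them out at another. This is the crude cousin of what pvoc does (pvoc resamples phase-vocoder frames and keeps phases coherent); the function and parameter names here are made up for illustration.

```python
import math

# Naive granular time stretch (illustration only, not pvoc's
# phase-vocoder maths): overlap-add Hann-windowed grains, reading
# the input slower (factor > 1) or faster (factor < 1) than we write.

def stretch(signal, factor, grain=256, hop=64):
    out_len = int(len(signal) * factor)
    out = [0.0] * (out_len + grain)
    norm = [0.0] * (out_len + grain)
    # Hann window to crossfade the overlapping grains
    win = [0.5 - 0.5 * math.cos(2 * math.pi * n / grain) for n in range(grain)]
    out_pos = 0
    while out_pos < out_len:
        in_pos = int(out_pos / factor)   # map output time back to input time
        for n in range(grain):
            if in_pos + n < len(signal):
                out[out_pos + n] += win[n] * signal[in_pos + n]
                norm[out_pos + n] += win[n]
        out_pos += hop
    # normalise by the summed window so overlaps don't change the level
    return [o / g if g > 1e-9 else 0.0 for o, g in zip(out, norm)][:out_len]
```

It smears transients (pvoc's frame interpolation has the same problem), which is part of that characteristic stretched sound.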

Thursday, February 9, 2012

DnB production: Mastering EQ

Ever listened to some drum and bass? Wondered how they got that sound? A fraction of that sound is what I'm going to explain today. Apart from a whole lot of drum programming, compression, reverb and filtering, there are some other "last" steps to take. One of the interesting things I've noticed is that quite a lot of "old school" DnB tracks have some hefty master EQing going on.

Check out "Terrorist" by Renegade: it's a DnB classic. First just listen, and estimate what EQing could have been applied. Then listen to the track while routing the audio into JAAA (JACK & ALSA Audio Analyser), and check out this:

You see that?? Wow. 'Bout a -20dB cut all around 10kHz, and another starting around 15kHz all the way up to 17kHz. Sure, your ear's response to highs isn't linear, but that's still a hell of a lot of EQing going on. It's all about the brightness of the track right there: that crispy snare snap and those ringing hats.

Now check "Helicopter" by Deep Blue Jungle: it's got quite an even response. Note the heavy bass, and that it's a sound that is quite transient: it appears, rings, and then dims away until the next downbeat. In between those downbeats there are substantial congas, hats, shakers etc. going on. It has some high cut (possibly shelving) applied, starting at just over 15kHz:




Then check out DJ Hype's "The Chopper", and note that the song is all about the bassline. To make the bass more obvious, there's a -40dB high shelf applied at just over 11kHz, dimming all the bright, attention-grabbing overtones of the fast drums. As the drums are never a very prominent part of the sound, they're just cut constantly.

Finally we'll look at Badman's "War in '94": it features some quite low bass lines, some saxophone and drums. Some pad sounds are also prominent at certain stages during the song:


All in all, what I'm going to do when producing a DnB track is load up Fons & Nedko's fantastic 4-band parametric EQ with LV2 GUI: http://nedko.arnaudov.name/soft/lv2fil/trac/

It provides a new way to listen to the same songs, but without that mastering EQ applied: just reverse it using the above filter. (Yes, the 4th section is disabled in the screeny: turn it on / off to notice the huge difference in brightness in the track!)


Conclusion: Don't be afraid of some hefty EQing; it can really add that final shape to the song, either adding dramatic effect to a certain element of the mix, or reducing its impact to highlight the other parts of the song.
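If you want to play with shelving cuts like these outside a plugin, here's a hedged Python sketch of a high-shelf biquad using the standard "Audio EQ Cookbook" (RBJ) coefficient formulas, e.g. for a -40dB shelf from about 11kHz up like the "The Chopper" example. This is generic filter maths, not the lv2fil plugin's code; the function names are my own.

```python
import math

# High-shelf biquad, coefficients from the well-known RBJ
# "Audio EQ Cookbook" formulas. Generic DSP maths for illustration.

def high_shelf(fs, f0, gain_db, s=1.0):
    """Return (b, a) biquad coefficients, normalised so a[0] == 1."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    cw, sw = math.cos(w0), math.sin(w0)
    alpha = sw / 2.0 * math.sqrt((A + 1.0 / A) * (1.0 / s - 1.0) + 2.0)
    b = [A * ((A + 1) + (A - 1) * cw + 2 * math.sqrt(A) * alpha),
         -2 * A * ((A - 1) + (A + 1) * cw),
         A * ((A + 1) + (A - 1) * cw - 2 * math.sqrt(A) * alpha)]
    a = [(A + 1) - (A - 1) * cw + 2 * math.sqrt(A) * alpha,
         2 * ((A - 1) - (A + 1) * cw),
         (A + 1) - (A - 1) * cw - 2 * math.sqrt(A) * alpha]
    return [x / a[0] for x in b], [x / a[0] for x in a]

def biquad(signal, b, a):
    """Direct form I filtering with one biquad section."""
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in signal:
        y = b[0] * x + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1, y2, y1 = x1, x, y1, y
        out.append(y)
    return out
```

A nice property of the shelf: gain at DC stays exactly 0dB while the gain at Nyquist equals gain_db, which is why the bass is untouched and the drum brightness vanishes.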


Why another blog?

This blog is dedicated to production techniques, tips & tricks using open source audio software and the Linux platform. More specifically, I'm using Arch Linux with the ArchAudio repos and some custom compiled software. It's a little techy, but well worth the new and fancy features that are available when using the latest & greatest :)

Hopefully you'll enjoy the resources posted on the blog, and have some fun browsing around. Comments & opinions welcome! -Harry