When analogue, digital and acoustic sound meet
By Ben Hayes
A Studio 66 modular synthesizer system.
“Analogue” and “digital” are two of the most divisive words in music technology when it comes to electronic instruments. Search the terms on music production forums and you will find pages of discussion about whether a vintage Minimoog is better or worse than the latest Nord or Roland synthesizer. The staunchness of views reflects emotional connections with all kinds of electronic instruments, and in this sense the debate is as subjective as the larger one about “real” acoustic instruments versus electronic ones.
So which is better: analogue or digital? What are the key differences, and why does it matter?
It’s not uncommon to hear the sounds produced by analogue instruments described as “warm” and “rich”, while many claim their digital counterparts sound “cold” and “precise”. This makes sense when you consider what goes on behind the scenes. An analogue synthesizer is filled with physical electronic components which work together to create a continuously changing voltage. This is susceptible to imperfection, variation, and even influence from the environment (they’re notorious for going out of tune in the middle of a performance). A digital synth, on the other hand, is essentially a computer that crunches sums and outputs a precise series of numbers which can be reproduced perfectly time after time.
Imperfections are responsible for the analogue “warmth” and richness. Consistency is responsible for digital “coldness”. It’s no secret that we tend to prefer a degree of inconsistency. Research on the “uncanny valley” has shown that near-perfect replications unsettle us. It’s likely the wobbles, deviations and flutters of analogue synthesis have a certain reassuring humanity to them.
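The contrast between the two worlds can be sketched in a few lines of code. Below is a toy illustration (not a model of any real synthesizer): a “digital” oscillator that produces exactly the same samples every time it is run, next to an “analogue-style” oscillator whose pitch wanders slightly from sample to sample, the way a voltage-controlled oscillator might drift with temperature. The function names and the drift amount are illustrative assumptions.

```python
import math
import random

SAMPLE_RATE = 44100  # samples per second

def digital_sine(freq, n_samples, sample_rate=SAMPLE_RATE):
    """A 'digital' oscillator: the same inputs always yield the same samples."""
    return [math.sin(2 * math.pi * freq * n / sample_rate)
            for n in range(n_samples)]

def drifting_sine(freq, n_samples, drift=0.5, sample_rate=SAMPLE_RATE, seed=None):
    """A toy 'analogue-style' oscillator: the frequency takes a small
    random walk each sample, so no two takes are ever identical."""
    rng = random.Random(seed)
    phase, out = 0.0, []
    for _ in range(n_samples):
        freq += rng.uniform(-drift, drift)  # wander by up to +/- drift Hz
        phase += 2 * math.pi * freq / sample_rate
        out.append(math.sin(phase))
    return out

# The digital oscillator reproduces itself perfectly...
a = digital_sine(440.0, 1000)
b = digital_sine(440.0, 1000)
# ...while two unseeded analogue-style takes never quite match.
c = drifting_sine(440.0, 1000)
d = drifting_sine(440.0, 1000)
```

Run twice, the digital version is bit-for-bit identical; the drifting version never is, which is the whole “warmth versus precision” argument in miniature.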
Rich in imperfections, analogue synthesizers have much in common with “real” acoustic instruments. A violin string vibrates when it is bowed, acting like an oscillator. The vibrations are passed down the bridge to the body of the instrument, shaping the sound like an incredibly complex filter. Even the tiniest movement, intentional or not, is magnified and projected into the world. At its worst this means that shaky hands can ruin a performance. At its best, however, every tiny aspect of the violinist’s technique coalesces to form an emotional and expressive outpouring of music. A tiny slide into a note, a tastefully placed vibrato, contrasting timbres: all of these minute modes of musical expression, honed through hours of practice, allow an instrumentalist to fill a room with deeply expressive music, each note carrying seemingly infinite depth. In the electronic world, analogue sound approximates this deep but irregular expressiveness.
But this is not to say that analogue sound is somehow more expressive than digital. Just consider the controls available to shape and manipulate any sound on a digital synthesizer: velocity, aftertouch, modulation, pitch bend, expression pedals, breath controllers. What digital synthesis may lack in warmth (and many would disagree that it does) it makes up for in possibilities. When the Yamaha DX7 first arrived on the scene in 1983 it revolutionised the world of synthesis, offering a palette of sounds that had never been heard before. The depth of its FM synthesis engine is still keeping sound designers busy today.
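The idea behind the FM synthesis that powered the DX7 is surprisingly compact: one oscillator (the modulator) wiggles the phase of another (the carrier), spraying sidebands across the spectrum. The classic two-operator form is y(t) = sin(2πf_c·t + I·sin(2πf_m·t)), where I is the modulation index. The sketch below is a minimal illustration of that equation, not the DX7's actual six-operator engine; the frequencies and index chosen are arbitrary assumptions.

```python
import math

def fm_sample(t, carrier_hz=440.0, modulator_hz=220.0, index=2.0):
    """Two-operator FM: the modulator oscillator wiggles the carrier's phase.
    A higher modulation index means more sidebands, i.e. a brighter sound."""
    return math.sin(2 * math.pi * carrier_hz * t
                    + index * math.sin(2 * math.pi * modulator_hz * t))

# One second of audio at 44.1 kHz
tone = [fm_sample(n / 44100) for n in range(44100)]
```

With the index at zero the modulator does nothing and a plain sine wave falls out; turn it up and the timbre changes dramatically, which is why a handful of operators can cover bells, electric pianos and basses alike.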
Even five minutes spent browsing the presets of a synth like Omnisphere is enough to convince anybody of the limitless potential of digital tools. They allow us to realise almost any sound imaginable. That’s something no analogue synth can claim, no matter how warm and fuzzy it sounds.
While it’s not fair to claim that analogue or digital is more or less expressive, it is fair to say that — until recently — there were clear tradeoffs. You couldn’t find the breadth of expressive control on an analogue synth any more than you could find that imperfect and vibrationally powerful quality of analogue sound on a digital synth. These divisions enliven the debate about analogue versus digital. They are part of a larger divide: the divide between acoustic sound and electronic sound.
New digital instruments, however, are increasingly blending the best of analogue and digital sounds and even blurring the distinction between acoustic and electronic expression. The last few years have seen the number of ways to interact with electronic sound rise dramatically, with new controllers and interfaces making use of gyroscopes, sensors, pliable surface materials — even brainwaves. Technologically innovative instruments like the Seaboard, Linnstrument, and Haken Continuum are facilitating a new type of “multidimensional” expression.
For the first time, a musician can combine the expressive depth of an acoustic instrument with the enormous palette of sounds in the digital domain. Any sound you can imagine can be moulded and manipulated — imperfectly, inconsistently, improvisationally — right at the player’s fingertips. Within a few years the whole premise of the discussion about analogue and digital sound may change. It could well shift from analogue versus digital to: “What kind of digital tool best captures analogue and acoustic sound?”
Ben Hayes is a London-based composer, music producer, and graduate of the Guildhall School of Music and Drama.
Photo Credits: Peter Gorges (Modular Synthesizer), Stefan Kellner (Vintage Minimoog), Tom Driggers (Violin).