
Synthesis types

Additive (Fourier) synthesis
Amplitude (ring) modulation
FM synthesis
Granular synthesis
Linear Arithmetic (LA) synthesis
PCM sample playback synthesis
Phase Distortion synthesis
Physical modeling synthesis
Realtime convolution and modulation (RCM) synthesis
Subtractive synthesis
Vector synthesis
Wave sequencing synthesis
Wavetable synthesis

Additive (Fourier) synthesis
Every sound in nature, no matter how complex, can be expressed as a sum of sine-wave functions of various frequencies. These components can be partials or harmonics of the fundamental frequency: a harmonic is an integer multiple of the fundamental, while a partial need not be.


In the image above we have an example of a fundamental-frequency sine wave and its 2nd and 4th harmonics summed to create the final sound.

Now, let's take another example, this time in the frequency domain. If we take a short snapshot of the sound of an electric guitar and look at its spectral characteristics, we will see peaks at some frequencies and valleys at others, just as seen in the image below.


In the next millisecond these peaks and valleys shift a little and move to different frequencies. Now imagine a generator that can produce sine waves at the same frequencies where the guitar creates these peaks, and control the volume envelope of each sine wave. That is exactly what an additive synthesizer does.

Such a synthesizer has a bank of oscillators tuned to multiples of the base frequency (harmonics), and each oscillator has its own volume envelope. The more realistic you want an additive synthesizer to be, the more oscillators you need.


The name Fourier synthesis comes from Jean-Baptiste Joseph Fourier, who (among many other things) showed that any sound can be formed by summing sine waves. The best-known additive synthesizers are the Kawai K5 and the later K5000, which has over 1000 parameters per patch, so if you like editing for hours, it makes a nice addition to your studio setup.
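The idea above can be sketched in a few lines of plain Python. This is a minimal, hypothetical model: each harmonic gets a single static amplitude instead of a full volume envelope, and the function name and parameters are made up for illustration.

```python
import math

def additive(fundamental_hz, harmonic_amps, sample_rate=44100, duration=0.01):
    """Sum sine partials at integer multiples of the fundamental.

    harmonic_amps[k] is the amplitude of harmonic k+1 (a stand-in for
    the per-harmonic volume envelope, frozen to one static level).
    """
    n = int(sample_rate * duration)
    out = []
    for i in range(n):
        t = i / sample_rate
        s = sum(a * math.sin(2 * math.pi * fundamental_hz * (k + 1) * t)
                for k, a in enumerate(harmonic_amps))
        out.append(s)
    return out

# Fundamental plus its 2nd and 4th harmonics, as in the figure above:
wave = additive(100.0, [1.0, 0.5, 0.0, 0.25])
```

A real additive synth would sweep each amplitude over time with its own envelope; here the list of amplitudes is the whole "patch".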

Amplitude (ring) modulation
In general, modulation is the process of varying a carrier signal (usually a sinusoidal signal) with a modulating signal. This can be done in three ways, by modulating the phase, frequency, or amplitude of the signal. A device that performs this modulation is called a modulator. What we will cover in this article is amplitude (ring) modulation.


The image above shows a typical amplitude modulator. Let's assume we feed two sinusoidal signals into the modulator's inputs: the first (f1) has a frequency of 1000 Hz and the second (f2) a frequency of 100 Hz. In mathematical terms, an amplitude modulator multiplies the two input signals. Keep in mind that we are not multiplying the numbers 100 and 1000; we are multiplying sine waves, which is a completely different and slightly more complicated story. If you are interested, look up the multiplication of two sine waves in a math book. Without going too deep into the "why", at the modulator's output we get their sum and difference, f1+f2 and f1−f2, which means 1100 Hz and 900 Hz respectively (these are frequencies, not plain numbers). The spectrogram below shows the result of mixing f1 and f2 inside the amplitude modulator.
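The "sum and difference" claim is just the product-to-sum trig identity, and it can be verified numerically. The sketch below (function names are mine, not from any real library) multiplies the two sines sample by sample and checks that the result equals half the difference of cosines at 900 Hz and 1100 Hz.

```python
import math

def ring_mod(f1, f2, t):
    """Ring modulation: pointwise product of two sine inputs."""
    return math.sin(2 * math.pi * f1 * t) * math.sin(2 * math.pi * f2 * t)

def sum_diff(f1, f2, t):
    """Trig identity: sin(a)sin(b) = 0.5[cos(a-b) - cos(a+b)],
    i.e. components at f1 - f2 (900 Hz) and f1 + f2 (1100 Hz)."""
    return 0.5 * (math.cos(2 * math.pi * (f1 - f2) * t)
                  - math.cos(2 * math.pi * (f1 + f2) * t))

# The two expressions agree at every sample instant:
for i in range(1000):
    t = i / 44100
    assert abs(ring_mod(1000, 100, t) - sum_diff(1000, 100, t)) < 1e-9
```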


Time domain
Changing the carrier's amplitude as a function of the level of the modulating signal is the process we call amplitude modulation. This can clearly be seen in the image below. The vertical axis shows amplitude and the horizontal axis shows time (a typical waveform display). A modulating signal of a human voice (image 1) modulates the amplitude of the carrier (image 2), which results in the modulated signal (image 3).


Image 2 – carrier

Image 3 – modulation

These images show a time frame of only a few milliseconds, just to give you a brief idea of the mixing process inside the amplitude modulator. The image below combines image 1 and image 3, so that you can see in the simplest way how a human voice modulates the amplitude of the carrier.


Frequency domain
Let's take a look at the spectral characteristics of the same example. We took a human voice about 3 kHz wide and mixed it with a carrier at 10 kHz. At the modulator's output we get two sidebands that contain the same information but are mirrored against each other; the mirror line is the carrier frequency of 10 kHz. These two sidebands are called the upper and lower sidebands (USB and LSB). The upper sideband is the same human voice transposed to 10 kHz, while the lower sideband is the spectrally inverted human voice. More on inversion later.

Image 4 – Human voice

Image 5 – Amplitude modulation at 10 kHz

If you want to have fun with transposed human voices, all that is left is to use a sharp filter to remove the lower sideband. What is marked as USB on the image above is a human voice transposed to 10 kHz. For more usable "weird voices", I recommend lower carrier frequencies, 3 kHz at most, and you can get all sorts of Donald Duck and space voices.

Spectrum inversion
Ever wondered how a song would sound if you could invert it in the frequency domain (tones that were low would now be high, and tones that were high would now be low)? Well, if you understood the process of amplitude modulation, you can do it too. Here is a short example. Take a song and apply a steep 8 kHz low-pass filter; this confines the song to a limited frequency band to avoid aliasing problems later. Mix this song with an 8 kHz carrier inside an amplitude modulator. Now you have two copies of the song: one in the range 0–8 kHz and the other in the range 8–16 kHz. The first is inverted in the frequency domain, while the second is the same as the original but transposed to 8 kHz. All that is left is to apply the steep 8 kHz low-pass filter again to remove the upper sideband, and you have a frequency-inverted song.
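The frequency bookkeeping of this recipe is simple enough to write down. As a sketch (the function is hypothetical, and a real implementation would of course process audio samples, not frequencies), a component at f ring-modulated with an 8 kHz carrier lands at 8000 − f and 8000 + f:

```python
def inverted_frequency(f, carrier=8000.0):
    """Where a component at f Hz lands after ring modulation
    with an 8 kHz carrier."""
    lower = carrier - f   # lower sideband: kept by the final low-pass
    upper = carrier + f   # upper sideband: removed by the final low-pass
    return lower, upper

assert inverted_frequency(1000)[0] == 7000   # a low tone becomes high
assert inverted_frequency(7000)[0] == 1000   # a high tone becomes low
```

The surviving lower sideband maps 0–8 kHz onto 8–0 kHz, which is exactly the inversion described above.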

Ring or Amplitude modulation?
Both names are correct; however, if you need to choose the more appropriate one, it would be ring modulator, because when you say "ring modulator" it is exactly known what you mean: an analog circuit made of diodes, usually arranged in the shape of a ring, which multiplies two input signals.

An amplitude modulator does that too, but not always, because there are a few different types of amplitude modulators. For example, in radio transmission the amplitude modulator outputs not only the sum and difference but also the carrier signal. This is the most common amplitude modulator in the world, and the whole AM radio broadcast system is based on it. If it had been used in our example, the image above would show a large signal at 10 kHz with an amplitude about twice that of either sideband. But as you can see, there is nothing at 10 kHz, because we used a pure ring modulator.

If we look at the math, ring modulation gives us only the sum and difference of the input signals. Thus "amplitude modulation" is a correct name too, but to avoid confusion with radio broadcast technology it is better to use the term "ring modulator". Hint: the carrier was in a way a byproduct of early amplitude modulators, but it turned out useful for broadcasters; it drives the AGC (automatic gain control) circuit in old AM radios, so that the signal's volume doesn't fade as much under ever-changing atmospheric conditions.

FM synthesis
In FM synthesis, one (or more) oscillator is used to modulate the frequency of another. Although both oscillators use simple waveforms (like sine waves), the result can be a sound with a very complex harmonic structure. We usually call one oscillator the modulator and the other the carrier.


As seen in the two examples in the image above, the complexity of the resulting wave always depends on the output level of the modulator (marked in red). If we increase the level of the carrier, we are just increasing the overall volume. In the first example (left), the modulator's level is set to 0, so the resulting tone is the same as the carrier. In the second example (right), we increased the modulator's output level to 10, which resulted in a tone totally different from both the modulator and carrier tones.

Using different modulation levels, we create different harmonic structures at the output. However, this is not enough, because each instrument has a characteristic way in which its sound changes over time. This is called the envelope. For example, a guitar begins loud and then gradually reduces its volume and harmonic content. A Hammond organ, on the other hand, maintains the same volume and harmonic content for as long as you hold the key. As you can see, these two instruments have different envelopes. That is why FM synthesizers (like the Yamaha DX7, SY-77, etc.) have an envelope on each oscillator. The package of oscillator + envelope is usually called an operator. Operators can be arranged in many different ways called algorithms. What we described above is the simplest algorithm, consisting of one modulator and one carrier.


To create more complex sounds, you need more than one operator. In that case you can have an operator modulating another operator, which in turn modulates another operator that modulates the lowest operator (the carrier), as seen in algorithm example 1. Or you can have three operators simultaneously modulating one operator, as seen in algorithm example 2.
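The simplest algorithm (one modulator, one carrier) can be sketched like this. Note the hedge: DX-style "FM" synths actually compute phase modulation, so this sketch adds the modulator's output to the carrier's phase; the function and its parameters are invented for illustration, with mod_index standing in for the modulator's output level.

```python
import math

def fm_pair(carrier_hz, mod_hz, mod_index, sample_rate=44100, duration=0.01):
    """One modulator driving one carrier (simplest FM algorithm),
    implemented as phase modulation of the carrier."""
    out = []
    for i in range(int(sample_rate * duration)):
        t = i / sample_rate
        modulator = math.sin(2 * math.pi * mod_hz * t)
        out.append(math.sin(2 * math.pi * carrier_hz * t + mod_index * modulator))
    return out

plain = fm_pair(440, 440, mod_index=0)   # level 0: just the carrier tone
bright = fm_pair(440, 440, mod_index=5)  # level up: complex harmonic spectrum
```

Raising mod_index (the modulator level) is what enriches the spectrum; scaling the carrier's output would only change the overall volume, as described above.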


Simple FM modulation
Some analog synthesizers do have FM, but it is basic FM with a lot of restrictions. To do proper FM you need more than two oscillators, each with its own volume envelope, plus many other things too complex to implement properly in an analog synthesizer. It would cost too much just to sound like a DX7, which you can buy for much less. However, if you are good at FM programming and have an analog synth that features FM, you can make some nice FM sounds (bells, metals, etc.). To achieve a true DX sound, though, you need PM, or phase modulation. This is what drives all of the FM-type synths on the market.

FM emulation
Even if your synthesizer does not have any kind of FM, but has a fast enough LFO, you can create a primitive kind of frequency modulation (that is, modulation, not FM synthesis). Technically, pitch modulation is the same thing as frequency modulation (FM), so with an LFO you can create a frequency-modulated sound. Keep in mind that this is all you can get, and it is far from FM synthesis. Take the LFO and set it to a high speed. Route it to modulate the pitch of the oscillator. If possible, apply an envelope to modulate the output level of the LFO. If that is not possible, use the LFO Fade function; you need Fade Out, whose purpose is to reduce the LFO's output level to zero after a short time. If the LFO has a delay, you can set it to hold the LFO at maximum level and then let the Fade function fade it away. We are talking about very short times here: 50–200 ms for the delay and about 300–1000 ms for the fade. Experiment. Choose a sine wave for the oscillator (wave generator) and a sine wave for the LFO. Play high notes on the keyboard and adjust the amount of LFO modulation applied to the pitch until you are satisfied with the result. With a fast enough LFO and a good Fade function, you should be able to create a few nice bell sounds.
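The patch recipe above amounts to this sketch: a fast LFO wobbles the oscillator pitch, and a fade ramp (standing in for the LFO Fade Out function) brings the modulation depth down to zero. Everything here is a hypothetical model, not any particular synth's parameter set.

```python
import math

def lfo_pitch_mod(osc_hz, lfo_hz, depth_hz, fade_s=0.5,
                  sample_rate=44100, duration=0.02):
    """Primitive 'FM' from a fast LFO: the LFO modulates the oscillator's
    instantaneous frequency, and a linear fade zeroes the depth over time."""
    out, phase = [], 0.0
    for i in range(int(sample_rate * duration)):
        t = i / sample_rate
        fade = max(0.0, 1.0 - t / fade_s)      # stand-in for LFO Fade Out
        inst_hz = osc_hz + depth_hz * fade * math.sin(2 * math.pi * lfo_hz * t)
        phase += 2 * math.pi * inst_hz / sample_rate
        out.append(math.sin(phase))
    return out

bell = lfo_pitch_mod(880, lfo_hz=200, depth_hz=300)
```

Integrating the instantaneous frequency into a running phase is what makes this true frequency modulation rather than a crude pitch offset.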

Granular synthesis
In granular synthesis, samples are split into small pieces around 1 to 100 milliseconds in length; these small pieces are called grains. Multiple grains may be layered on top of each other, all playing at different speeds and volumes. You can think of it as a kind of wavetable synthesis, but here the samples are played back so briefly that you hear them as a timbre, not as a rhythm. By varying the waveform, envelope, duration, and density, many sounds can be produced that are not possible with any other synthesis type. There are many computer programs that do granular synthesis; one of the most famous is Kaivo by Madrona Labs.
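A bare-bones granulator can be sketched as chop, window, and overlap-add. This is a minimal, hypothetical implementation (the function name and parameters are mine): each grain gets a simple Hann-window fade so the grains don't click, and grains are laid back down at a new spacing.

```python
import math

def granulate(source, grain_len, hop):
    """Chop 'source' into grains of grain_len samples, fade each with a
    Hann window, and overlap-add them at 'hop'-sample spacing."""
    out = [0.0] * (len(source) + grain_len)
    pos = 0
    for start in range(0, len(source) - grain_len, hop):
        for j in range(grain_len):
            w = 0.5 - 0.5 * math.cos(2 * math.pi * j / grain_len)  # Hann fade
            out[pos + j] += source[start + j] * w
        pos += hop
    return out

# A 220 Hz test tone chopped into ~10 ms grains, re-laid twice as densely:
src = [math.sin(2 * math.pi * 220 * i / 44100) for i in range(4410)]
grains = granulate(src, grain_len=441, hop=220)
```

Varying grain_len, hop, the window shape, and per-grain pitch/volume is where the characteristic granular textures come from.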


Linear arithmetic (LA) synthesis
Introduced with the Roland D-50 in the mid-'80s. At that time the biggest problem in sample playback was limited memory: building a sample player with full individual samples would have cost enormously, because the ROM chips on the market were ridiculously small and expensive. Observations on human hearing showed that the most important element in making each sound distinct from others is its attack transient. That is exactly how the D-50 worked: it used short sampled attack transients and analog-style oscillators for the sustained part of the sound. The short samples didn't require much memory, which reduced the cost of the synth.


Today this kind of synthesis is probably no longer needed, but it still sounds unique. We can add that the analog-emulation part of the D-50 is excellent, making it a really powerful and thick-sounding digital polysynth.

PCM sample playback synthesis
Once an analog signal is converted into digital form through the sampling (digitizing) process, the result is called a sample. Pulse Code Modulation (PCM) is the coding technique used in this process. PCM is used in all digital instruments and digital devices such as PCs and mobile phones; examples of PCM are the '.wav' and '.aif' file types on your computer. Sampling is a very simple process: you connect the instrument to a soundcard input and use a recording application that digitizes the signal and turns it into PCM. The core of this process happens in the soundcard's analog-to-digital converter; the better the converter, the better the results. Four parameters define the quality of an A/D converter: sampling rate, bit depth, dynamic range, and signal-to-noise ratio. The sample is then stored in memory (RAM or hard disk).


A device capable of sampling and storing is called a sampler. If a device can play back those samples at different pitches, we call it a sample-playback synthesizer. About 90% of today's synthesizers are of this kind, and they all use the subtractive synthesis method. Some samplers have a lot of advanced functions previously found only on synthesizers; among the most popular were the E-mu Emulator E4, Roland S-760, Akai S3000, and Yamaha's A series.

Phase distortion synthesis
Phase distortion synthesis is a method introduced in 1984 by Casio in its CZ range of synthesizers. It is similar to phase modulation synthesis in the sense that both methods dynamically change the harmonic content of a carrier waveform by applying another waveform (the modulator) in the time domain. Casio introduced the term 'phase distortion'.


From a programmer's point of view, every waveform has a distortion range: set to 0 it produces a pure sine wave, and set to maximum it produces the waveform selected on the front panel (e.g. a saw, square, etc.). Multi-stage envelopes can be used to sweep back and forth between these two extremes, resulting in a timbre change. Essentially, this is how phase distortion operates. The results are pretty unique, though. There are some PD demos on this site in the Store area.
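The core trick can be sketched as reading a sine wave through a bent phase ramp. This is a rough model of the CZ-style idea, not Casio's exact curve: at amount 0 the ramp is linear (pure sine), and higher amounts push the "knee" of the ramp earlier, leaning the waveform toward a saw-like shape.

```python
import math

def phase_distort(t_frac, amount):
    """One sample of a phase-distorted sine.

    t_frac in [0, 1) is the position within the cycle; amount in [0, 1]
    moves the knee of the phase ramp (a sketch, not Casio's real curve).
    """
    knee = 0.5 * (1.0 - amount) + 1e-9       # where the ramp bends
    if t_frac < knee:
        phase = 0.5 * t_frac / knee          # faster first half-cycle
    else:
        phase = 0.5 + 0.5 * (t_frac - knee) / (1.0 - knee)
    return math.sin(2 * math.pi * phase)

pure = [phase_distort(i / 64, 0.0) for i in range(64)]   # plain sine
bent = [phase_distort(i / 64, 0.8) for i in range(64)]   # saw-leaning shape
```

An envelope sweeping 'amount' between 0 and its maximum gives exactly the timbre sweep described above.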

Physical modeling synthesis
As DSP processors grew more powerful, it became possible to synthesize sound using a set of equations and algorithms that simulate a physical sound source. This method mathematically models individual instruments and their parts, for example a metal string, the body of an acoustic guitar, a pluck, etc. All of this can be described by mathematical means.


The first physical modeling synth was the Yamaha VL-1. Later came the Korg Prophecy and the Z1. Not many more have appeared since then, aside from Kaivo and some others.

Realtime convolution and modulation synthesis (RCM)
Two synthesizers in the world use this kind of synthesis: the Yamaha SY-77 (TG-77) and the SY-99. It is the third type of synthesis they offer, next to standard subtractive synthesis (AWM) and frequency modulation synthesis (FM). The name itself sounds complicated, but in reality the process is very simple. There are two configurations available.


In the first, you take a whole AWM element (waveform, pitch, filter, envelope) and insert it as the modulator input of an FM operator. That is, instead of a simple sine wave as the modulator, you use a whole tone with its own waveform, filter, and amp. This allows even more complicated FM synthesis.


In the second configuration (image above), you take the whole FM section and feed it into the AWM section. That is, the sound created in the FM section of the synth becomes a 'waveform' that you process in the AWM section, which is a standard subtractive processing chain. For example, if you apply a controller to modulate the FM section, you get a 'live', constantly changing waveform (marked as '=' in the image above) that alters its timbre all the time. You can then apply the filter and envelopes of the AWM section to change the sound in more complex ways. I know this all sounds exotic, but it actually requires a lot of programming to produce something good and useful.

Subtractive synthesis
This is the most common type of synthesis and is used in all analog and digital synthesizers and samplers. It starts with a sound that is sent to a filter and then to an amplifier. By doing this, you subtract some of the partials that existed in the original sound and change the sound's envelope. This process is described in depth in the synthesizer basics article below.

Vector synthesis
Introduced in 1985 by Chris Meyer, it was a totally new concept in sound shaping. When asked how he invented it, Chris said: 'One engineer was asking me to explain how various instruments performed crossfades. I had finished discussing the Fairlight, and had moved on to the PPG – explaining its wavetables, and its ability to scan a group of waves first in one direction and then back again. While I was scrawling this back-and-forth motion in my notebook, suddenly a little twinge went off in the back of my head, and my hand drew the next line arcing down the page.. and the concept of crossfading between waves in two dimensions, not just one, was born.'


The name of this synthesizer was the Prophet VS. It could mix four waveforms via a joystick and a multistage envelope. Other vector-type synthesizers included the Yamaha SY-22, SY-35, and TG-33, and the Korg Wavestation (which is more than just a vector synth). On the Yamahas, the joystick mixes two FM elements with two sample elements.

Wave sequencing
First introduced on the Korg Wavestation, this method offers (as its name says) wave sequencing. A wave sequence is a series of waves (samples), each with its own level, duration, crossfade time (to the next wave), and transposition. Wave sequences can be stepped through automatically or via various modulation sources.


When you set the crossfade to a low value, you get those characteristic 'rhythmic' sequences that are a trademark of the Korg Wavestation. The Ensoniq TS series also features wave sequencing.

Wavetable synthesis
The best examples are the PPG Wave and the Waldorf Wave / Microwave series. Their sound creation is based on sequencing through a table of waveforms. It is important to note that these waveforms are single-cycle, and therefore very short. You can imagine each one as storing the spectral energy of a single-cycle snapshot. They are called 'waves', and these waves can be combined into lists called 'wavetables'.


You can apply various controllers, such as envelopes and LFOs, to select which entry in the wavetable to play. It is also possible to interpolate between subsequent waveforms to make the timbral change smoother if desired. Although the waveforms are short, you have so many modulation possibilities that no sample-playback synth can match it.
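Scanning a wavetable with interpolation can be sketched in a few lines. This is a hypothetical two-entry table (a sine and a square wave); a fractional 'position', the value an envelope or LFO would sweep, crossfades between neighbouring single-cycle waves.

```python
import math

def scan_wavetable(waves, position):
    """Linearly interpolate between adjacent single-cycle waves.

    position in [0, len(waves)-1]; fractional values crossfade between
    neighbours, so sweeping it produces a smooth timbre change."""
    i = min(int(position), len(waves) - 2)
    frac = position - i
    return [(1 - frac) * a + frac * b for a, b in zip(waves[i], waves[i + 1])]

N = 64
sine = [math.sin(2 * math.pi * k / N) for k in range(N)]
square = [1.0 if k < N // 2 else -1.0 for k in range(N)]
table = [sine, square]

halfway = scan_wavetable(table, 0.5)   # a 50/50 sine-square blend
```

Real wavetable synths hold many more entries per table, but the scan-and-interpolate mechanism is the same.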

Synthesizer basics

The structure
The structure of almost all synthesizers is basically the same. It starts with the tone generator, whose function is to create the sound. On analog synthesizers this generator is a simple oscillator circuit that can generate a few basic waveforms such as pulse, saw, or sine. Therefore, on analog synthesizers it is more common to use the term oscillator than tone generator. On digital synths the terms used are wave gen, wave generator, tone generator, and sometimes even oscillator (though there is no real oscillator inside).


Next comes the filter, which defines the timbre of the sound and adds or removes harmonics from the original sound created in the oscillator. The filter is followed by an amplifier, in which you set up the volume change of the sound. Envelopes and LFOs are used to manipulate various settings; for example, in the oscillator they can control its pitch, and in the filter they can define how the filter changes over time.


That was the basic description; each synth can have its own, more complicated structure. The image above shows the structure of Roland's XV synthesizer. Each patch can contain up to four tones (sounds), each with its own settings. A patch also contains common data, parameters that apply to all four tones, such as patch name, overall level, octave shift, key mode (mono/poly), portamento settings, bender range, and more. A patch also contains modulation control (matrix control), in which you specify which controller will change which parameter: for example, the mod wheel on the keyboard changing the filter's cutoff and resonance or the pitch of the wave generator (WG).

The purpose of the oscillator is to produce a sound that you will later process with a filter and amp. Once you press a key on the keyboard, you "activate" the oscillator (in an analog synthesizer it is actually always on). The oscillator (OSC) is the starting point of any synthesizer; it is where the waveform is created. In an analog synthesizer, the oscillator can be digitally controlled (DCO) or voltage controlled (VCO), and it usually produces a pulse or saw wave; the pulse's width can be controlled and even modulated (PWM). All oscillators, no matter the application, work the same way. If we look at their heart, we will find something like this.


Described in the simplest way, an oscillator is an amplifier and a filter operating in a loop. The basis of operation is a tuned resonant circuit, for example an LC circuit made of an inductor (L) and a capacitor (C). In this circuit, voltage and current vary sinusoidally with time and are 90 degrees out of phase. There are instants when the current is zero, so the energy stored in the inductor is zero, while at the same time the voltage across the capacitor is at its peak and all of the circuit's energy is stored in the electric field between the capacitor's plates. There are also instants when the voltage is zero and the current is at its peak, with no energy in the capacitor; then all of the circuit's energy is stored in the inductor's magnetic field. As you can see, the energy stored in this electrical system swings between two forms. Unfortunately, this swinging won't go on forever, due to circuit losses: the conductors have some resistance, as do the capacitor and inductor. That is why we need amplification. The goal of this amplification is not to add high gain, but just to compensate for the losses in the LC circuit. You can think of an oscillator as a pendulum, which loses its energy of motion to drag and friction, so you need to give it a kick from time to time.
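For the LC tank described above, the frequency of that energy swing is given by the standard formula f = 1 / (2π√(LC)). A quick worked example (the component values are illustrative, not from any particular circuit):

```python
import math

def lc_resonant_hz(inductance_h, capacitance_f):
    """Resonant frequency of an LC tank: f = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2 * math.pi * math.sqrt(inductance_h * capacitance_f))

# e.g. a 10 mH inductor with a 100 nF capacitor rings at roughly 5 kHz:
f = lc_resonant_hz(10e-3, 100e-9)
```

Changing L or C retunes the tank, which is exactly how a VCO's pitch control works at the circuit level: vary an element of the resonant loop and the oscillation frequency follows.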


At the oscillator's output we can have various waveforms, depending on the type of oscillator we built. For example, if our oscillator produces a sinusoidal wave, we call it a sine-wave oscillator. There are many types of oscillators; the best known are shown in the image above, and those are (from left to right): sine, triangle, square, and saw. A square is actually a pulse wave with 50% width.

Wave generator
Digital synthesizers have a different kind of "oscillator" at their heart. It is not the classic oscillator we just described, but a device that plays back PCM waveforms. Pulse code modulation (PCM) is a digital representation of an analog signal in which the magnitude of the signal is sampled regularly at uniform intervals and then quantized to a series of symbols in a binary code. If you have some wav files on your computer, those are PCM waveforms, the same kind as in your digital synthesizer. The advantage of a digital wave generator is that it is not limited to basic waveforms (sine, saw, square, etc.) but can contain any kind of previously digitized data. You can have real piano samples, guitars, drums, etc., and of course the basic waveforms too. Most digital synths have at least a sine and a saw wave sample in their waveform memory, so you can create some analog-sounding tones as well.

Now you might ask why anyone needs an analog synth when the same types of waves can be created with a digital PCM wave generator. The answer lies in the term PCM generator itself. Each time you trigger this type of generator, it produces exactly the same-sounding waveform; the result is a uniform tone that stays the same. In contrast, analog synthesizers have imperfect oscillators, which produce various fluctuations that are different every time you hit a key; in a sense they sound unpredictable and different with each keypress. And that is just the beginning of the story. Once you engage pulse width modulation, no digital PCM synth can enter that territory. Then comes the analog filter, which again adds its own character. In short, this is one of the reasons why 30-year-old synthesizers usually cost more than the latest "state of the art" PCM digital synthesizer with 9000 patches.


It is important to understand that we are talking about two different worlds here. It would be pretty naive to blame some digital synth and call it poor because it can't do good analog sounds. If someone asks why a PCM synth can't make a super-thundering Moog bass sound, the simplest answer is: it was never designed to do so.

The most dramatic change to your sound takes place in the filter circuit. The richer the harmonic content of the waveform, the greater the change; examples of harmonically rich waveforms are the square and saw waves. If your synthesizer does not happen to have a resonant filter, you are missing one really important and charming aspect of a synthesizer.

If we look at the frequency domain of a harmonically rich sound such as a square wave and play a low note at, say, 20 Hz, we can see that its harmonic components are spread across the whole audio range (20 Hz to 20 kHz). What the filter does is isolate some parts of this range in order to accentuate others.

Types of filters
The best-known filter is the low-pass filter (LPF). It reduces the volume of all frequencies above the cutoff frequency, which you can specify in the filter settings. Once you cut out the high-frequency range, the sound becomes more mellow.

The next widely known type is the high-pass filter (HPF). It does exactly the opposite of the LPF: it cuts the part of the spectrum below the cutoff frequency. It can be useful for percussive sounds (nice analog-sounding hi-hats can be made with it). A high-pass filter can be resonant too, but there are actually not many synthesizers that feature one; the Korg MS-20 is one of the rare analog synths with an analog resonant high-pass filter.

And one of the most distinctive-sounding filters is probably the band-pass filter (BPF). This filter keeps only the region in the vicinity of the cutoff frequency and cuts the rest. With the resonance setting you shape the width of this filter: the more resonance, the narrower the filter. The image below shows the frequency response of the three basic filter types we just described.
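The gentlest of these, a one-pole (6 dB/octave) low-pass, fits in a few lines and makes the "cutoff" idea concrete. This is a textbook sketch, not any synth's actual filter; the coefficient formula is a common one-pole smoothing approximation.

```python
import math

def one_pole_lowpass(samples, cutoff_hz, sample_rate=44100):
    """Simplest (6 dB/oct) low-pass: each output leans toward the input
    by a coefficient derived from the cutoff frequency."""
    a = 1.0 - math.exp(-2 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in samples:
        y += a * (x - y)     # move part of the way toward the input
        out.append(y)
    return out

# With a 1 kHz cutoff, a 100 Hz tone passes almost untouched while a
# 10 kHz tone is strongly attenuated:
t = [i / 44100 for i in range(4410)]
low = one_pole_lowpass([math.sin(2 * math.pi * 100 * s) for s in t], 1000)
high = one_pole_lowpass([math.sin(2 * math.pi * 10000 * s) for s in t], 1000)
```

Synth filters cascade several such poles (and add a resonance feedback path) to get the steeper 12 and 24 dB slopes discussed below.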

Once you turn up the resonance on a low-pass filter, new harmonics pop up around the cutoff frequency, creating sonic components that didn't exist in the original sound. High levels of resonance can produce self-oscillation. Roland's Juno manual has a very interesting and specific explanation of resonance (I find it really fun to read today, but it is actually a good explanation):

“This control emphasizes the cutoff point set by cutoff frequency knob. As you raise the knob, certain harmonics are emphasized and the created sound will become more unusual, more electronic in the nature. If you alter the cutoff frequency while the resonance knob is set to a high level, you can create a type of sound that is attainable only from a synthesizer.” – Roland Juno 106 Manual

Filter poles / slopes
Sometimes you will read that a filter is a 4-pole type. This is just another term for a 24 dB-per-octave filter slope, the most common filter type in the world of analog synthesizers. The number of poles defines the sharpness of the filter: the more poles a filter has, the sharper its frequency response. This also affects the resonance, since a sharper filter gives a more powerful-sounding resonance. The Roland TB-303 uses a less common 18 dB slope. Now you might wonder what this 18 dB means. It tells you how much the filter attenuates per octave above the cutoff, in this case 18 dB. If you put the filter cutoff at 440 Hz, a signal one octave above, at 880 Hz, will be attenuated by 18 dB (about 63 times weaker in power terms).


If a filter is 4-pole, the signal is attenuated by 24 dB per octave, which for a 440 Hz cutoff means the signal at 880 Hz will be about 250 times weaker in power terms. The image above shows the attenuation curves of four filter types: 6 dB, 12 dB, 18 dB, and 24 dB.
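The "63 times" and "250 times" figures come straight from the decibel formula for power ratios, 10^(dB/10), and are easy to check:

```python
def octave_attenuation(db_per_octave, octaves=1):
    """How many times weaker (as a power ratio) a signal becomes per
    octave above the cutoff: 10 ** (dB / 10)."""
    return 10 ** (db_per_octave * octaves / 10)

assert round(octave_attenuation(18)) == 63    # TB-303-style 3-pole slope
assert round(octave_attenuation(24)) == 251   # common 4-pole slope
```

(As amplitude ratios, 10^(dB/20), the same slopes work out to about 8x and 16x per octave.)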

Which filter should you use? The choice is up to you. If you prefer nice smooth filter-sweep sounds, use a 12 dB filter with a good resonance value. For thundering bass sounds or high-resonance zaps, use a 24 dB filter (full resonance for zaps). The 6 dB filter is generally not used for synthesis, but it comes in handy for sample playback, gently removing the high end from harsh-sounding samples without badly distorting the phase of the sample.

Amplifier and envelope
The last stage of sound manipulation in the synthesizer happens in the amplifier section. Its purpose is to control the volume changes of the sound. On analog synthesizers the amplifier is usually called a VCA (if voltage controlled) or DCA (if digitally controlled). On digital synthesizers it is usually called AMP, or in Roland's case TVA, which stands for time-variant amplifier. The main part of the amplifier is the ADSR envelope.

ADSR envelope
This stands for Attack, Decay, Sustain, Release, and it represents four stages. Once you hit a key, the attack stage begins: Attack sets the amount of time it takes for the sound to rise from its starting level to full level. This is followed by Decay, during which the sound falls to another level that you set: the Sustain level. As long as you hold the key, the sound stays at the sustain level. Once you release the key, the sound dies away; to prevent it from disappearing too soon, you set the Release time, the amount of time it takes the sound to fall to zero after you release the key.
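The four stages can be sketched as a single level-vs-time function. The stage times and sustain level here are arbitrary illustrative defaults, not any synth's values.

```python
def adsr_level(t, gate_time, attack=0.05, decay=0.1, sustain=0.6, release=0.3):
    """Envelope level at time t (seconds) for a key held for gate_time.
    Attack, decay, and release are times; sustain is a level."""
    def held(u):
        if u < attack:                       # rising to full level
            return u / attack
        if u < attack + decay:               # falling toward sustain
            return 1.0 - (1.0 - sustain) * (u - attack) / decay
        return sustain                       # held at sustain level
    if t < gate_time:
        return held(t)
    # key released: ramp from the level at release time down to zero
    return max(0.0, held(gate_time) * (1.0 - (t - gate_time) / release))

assert adsr_level(0.05, 1.0) == 1.0          # top of the attack
assert adsr_level(0.5, 1.0) == 0.6           # sustain while the key is held
```

Multiplying the oscillator's output by this level, sample by sample, is all the amplifier section does.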

There are envelopes with more than the four stages we described, but they all work the same way. Roland's typical envelope has two decay stages, and some Yamaha SY-series synths, such as the SY-77 and SY-99, let you set loop points in the envelope, which is a pretty cool feature.

LFO and control
The purpose of the LFO is to alter various sound settings in a cyclic, back-and-forth manner. Usually an LFO can be applied to the oscillator's pitch, the filter's cutoff frequency, and the amp level. Applied to the pitch it gives vibrato; applied to the filter, a sweeping sound (such as wah-wah); applied to the amp level, tremolo.

As its name implies, an LFO is a Low Frequency Oscillator. It usually offers a few basic waveforms such as sine, saw or square. You can think of the LFO as a helping device: instead of manually changing the pitch or level of the sound, you program the LFO to do it for you. If you do wish to change some setting manually, you specify that in the controller section.
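As a concrete example of the amp-level case (tremolo), here is a minimal sketch of a sine LFO modulating a signal's level. The function name and the depth convention (0 = no effect, 1 = full modulation) are illustrative assumptions.

```python
import math

def tremolo(samples, lfo_rate_hz, depth, sample_rate=44100):
    """Amplitude modulation by a sine LFO: the gain sweeps cyclically
    between 1.0 and (1.0 - depth)."""
    out = []
    for n, s in enumerate(samples):
        lfo = math.sin(2.0 * math.pi * lfo_rate_hz * n / sample_rate)
        gain = 1.0 - depth * (0.5 + 0.5 * lfo)  # map sine to [1-depth, 1]
        out.append(s * gain)
    return out
```

Routing the same sine to cutoff frequency instead of gain would give the wah-like sweep, and routing it to pitch would give vibrato.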

Audio example 1: tone level modulation. Click here to hear the original, non-modulated sound. The LFO was set to modulate the amplifier (tone level), using a sine wave. The result can be heard here.


The two images above show amplitude as a function of time (waveform display). As you can see, the original sound had a constant level (amplitude). Once we applied the LFO, the level started to sweep between its minimum and maximum values. The shape of the sine wave modulating the original tone can clearly be seen in the second image.

Audio example 2: tone frequency modulation. Click here to hear the original, non-modulated sound. The LFO was set to modulate the oscillator's pitch (frequency), using a sine wave. The result can be heard here.

The image above shows frequency as a function of time (spectral display). You can clearly see how the applied LFO modulates the pitch (frequency) of the oscillator, and the shape of the modulating sine wave is plainly visible. The original tone was fixed at 440 Hz. Once we applied the LFO, the pitch started to vary by about 50 Hz above and 50 Hz below the original frequency.
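Vibrato like the one in the spectral display can be generated by integrating an LFO-swept instantaneous frequency. The 440 Hz base and ±50 Hz depth below match the example above; the 5 Hz LFO rate is an assumed, typical value, and the function name is illustrative.

```python
import math

def vibrato_tone(base_hz=440.0, depth_hz=50.0, lfo_hz=5.0,
                 seconds=1.0, sample_rate=44100):
    """Sine oscillator whose pitch sweeps +/- depth_hz around base_hz
    under a sine LFO."""
    out, phase = [], 0.0
    for n in range(int(seconds * sample_rate)):
        # Instantaneous frequency swept by the LFO
        freq = base_hz + depth_hz * math.sin(
            2.0 * math.pi * lfo_hz * n / sample_rate)
        # Accumulate phase so the pitch changes without clicks
        phase += 2.0 * math.pi * freq / sample_rate
        out.append(math.sin(phase))
    return out
```

Accumulating phase, rather than computing `sin(2*pi*freq*t)` directly, is what keeps the waveform continuous while the frequency moves.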

Most digital synths let you route controllers such as the mod wheel, aftertouch or velocity to sound parameters like the filter's cutoff point or resonance, the sound level, and so on. Some synths call this feature a modulation matrix. It works like this: first you specify the controller source, for example the keyboard's modulation wheel. Then you specify its destination, for example the filter cutoff frequency. Finally you set the amount, and you are ready to modulate the filter with the modulation wheel. If the sound's filter is already fully open, you need to apply a negative amount, so that moving the wheel closes the filter by the amount you specified. Better modulation matrix systems allow almost any synth feature to act as a source modulating a destination, for example LFO1 modulating the speed of LFO2 (in case the synth has two LFOs), which can produce very complex and unpredictable results. This is an area that requires a lot of experimenting, but the results are always rewarding.
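The source/destination/amount routing can be sketched in a few lines. This is a toy model, not any real synth's engine; the parameter and source names are made up for illustration.

```python
def apply_mod_matrix(base_params, routes, sources):
    """Tiny modulation-matrix sketch. Each route is a
    (source, destination, amount) triple; a negative amount
    lets a rising controller close a fully open parameter."""
    params = dict(base_params)
    for src, dest, amount in routes:
        params[dest] = params[dest] + amount * sources[src]
    return params

# Filter fully open, so the wheel gets a negative amount to close it.
params = {"filter_cutoff": 1.0, "amp_level": 0.8}
routes = [("mod_wheel", "filter_cutoff", -0.6)]
modded = apply_mod_matrix(params, routes, {"mod_wheel": 1.0})
```

With the wheel fully raised, `filter_cutoff` drops from 1.0 to 0.4, exactly the "negative value on an already open filter" case described above.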

Abbreviations and common terms


Circuit bending is changing, removing or adding electronic components inside a synthesizer to achieve behaviour unavailable in the original version. Usually cheap synths are circuit bent to sound wilder, more unpredictable, stranger, or all of the above. Circuit bending voids the warranty and can damage your synth permanently.

Legato is a function that should only work in monophonic mode. When Legato is on, pressing one key when another is already pressed causes the currently playing note’s pitch to change to that of the newly pressed key while continuing to sound. This can be effective when you wish to simulate performance techniques such as a guitarist’s hammering on and pulling off strings.

Modulation wheel affects the sound as specified by the control parameters (control matrix). On many synths it is set to vibrato by default.

Portamento is a function that causes the sound’s pitch to slide smoothly from one note to the next note played. Portamento is common on guitar, violin and other string instruments; it is not possible, however, on a fixed-pitch instrument like the piano. On a synthesizer, a parameter called ”portamento time” or ”portamento speed” defines how quickly the oscillator moves to the new note you press on the keyboard. When the Key Assign Mode is mono, this can be effective in simulating performance techniques such as a violinist’s glissando.
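A simple way to picture the ”portamento time” parameter is as a pitch trajectory between the old note and the new one. The linear glide below is a minimal sketch; real synths often glide in pitch (semitone) space or with an exponential curve instead, and the names here are illustrative.

```python
def portamento(start_hz, target_hz, portamento_time, sample_rate=1000):
    """Linear frequency glide from start_hz to target_hz over
    portamento_time seconds; returns the pitch trajectory."""
    n = max(1, int(portamento_time * sample_rate))
    return [start_hz + (target_hz - start_hz) * i / n for i in range(n + 1)]
```

A longer portamento time simply stretches the same trajectory over more samples, which is why the parameter is sometimes labelled ”speed” instead.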

Pitch bender (pitch wheel) bends pitch of the played note up or down, and is spring-loaded to return to center position.