This article answers some commonly asked questions about the modelling methods in GuitarML’s projects and demonstrates the results of several amp/pedal captures. The limitations of these techniques are also explained.

What is GuitarML?

GuitarML is an open-source, community-driven project that uses machine learning to model real amps and pedals. The GuitarML portfolio encompasses several guitar plugins that let you create and play amp/pedal captures in real time on your electric guitar.

All GuitarML software is free and open source, and is funded through donations via Patreon and GitHub Sponsors. The modelling techniques used in the plugins are based on research papers from the Aalto University Acoustics Lab in Finland (no affiliation). These papers describe how to use neural networks for black-box modelling of guitar effects and amplifiers.

How does the modelling process work?

  1. “Before” and “after” audio recordings are made using the target amplifier or pedal. There are several ways to accomplish this. You can use a signal/buffered splitter to record two tracks simultaneously:
(Diagram of a buffered splitter recording setup. Image by Author)

Or, you can take a pre-recorded input track and play it through the target device to get your “after” recording. Typically 3+ minutes of recorded audio yields good results.

  2. The “before” and “after” recordings are used by the software to capture a digital model. The neural network is made up of layers of parameters which are gradually adjusted to mimic the dynamic response of your amp/pedal (a simplified code sketch of this step follows the list). This is a separate step that takes a little coding knowledge, but the goal is to make the process as simple as possible.

  3. The resulting model (.json file) can be loaded into the appropriate plugin and played in real time using an electric guitar and audio interface, or through the NeuralPi hardware.
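For readers curious what step 2 looks like in code, here is a heavily simplified sketch of the idea. It assumes PyTorch and soundfile are installed, the file names are placeholders, and the recordings are aligned, mono, 44.1kHz, 32-bit float. The real Automated-GuitarAmpModelling code adds proper batching, validation, and the ESR loss described later in this article.

```python
# Heavily simplified sketch of the training step (step 2), not the actual
# GuitarML training code. Assumes "input.wav"/"output.wav" are aligned,
# mono, 44.1kHz, 32-bit float recordings.
import soundfile as sf
import torch
import torch.nn as nn

x, sr = sf.read("input.wav")   # dry guitar signal
y, _ = sf.read("output.wav")   # same performance through the amp/pedal

# Train on a short excerpt to keep the sketch fast.
n = 5 * sr
x = torch.tensor(x[:n], dtype=torch.float32).reshape(1, -1, 1)
y = torch.tensor(y[:n], dtype=torch.float32).reshape(1, -1, 1)

class SimpleCapture(nn.Module):
    """One LSTM layer plus a linear output layer, similar in spirit to
    the stateful LSTM models used by NeuralPi and Chameleon."""
    def __init__(self, hidden_size=20):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden_size, batch_first=True)
        self.dense = nn.Linear(hidden_size, 1)

    def forward(self, signal):
        out, _ = self.lstm(signal)
        return self.dense(out)

model = SimpleCapture()
optim = torch.optim.Adam(model.parameters(), lr=5e-3)

for epoch in range(100):              # real training runs much longer
    pred = model(x)
    loss = ((pred - y) ** 2).mean()   # plain MSE; the real code uses ESR
    optim.zero_grad()
    loss.backward()
    optim.step()
```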

How good do the captures sound compared to the real amp/pedal?

To explain this, I’ll use a couple of real-world examples of pedal and amp captures. The target sound and the modelled sound are compared using audio and plots of the signals. In each graph below, the purple line is the input guitar signal, the red line is the target amp/pedal signal, and the green line is the predicted signal from the trained model. Each graph shows approximately 8 milliseconds of audio.
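Comparison plots like these are easy to reproduce yourself; here is a minimal matplotlib sketch, where the file names are placeholders and the predicted signal is assumed to have been rendered to a wav file:

```python
# Plot ~8 ms of the input, target, and predicted signals (file names are
# placeholders). At 44.1kHz, 8 ms is roughly 350 samples.
import matplotlib.pyplot as plt
import soundfile as sf

x, sr = sf.read("input.wav")         # dry guitar signal
y, _ = sf.read("target.wav")         # recorded amp/pedal output
y_hat, _ = sf.read("predicted.wav")  # output of the trained model

start, window = 40000, int(0.008 * sr)
t = range(window)
plt.plot(t, x[start:start + window], color="purple", label="Input")
plt.plot(t, y[start:start + window], color="red", label="Target")
plt.plot(t, y_hat[start:start + window], color="green", label="Prediction")
plt.xlabel("Sample")
plt.ylabel("Amplitude")
plt.legend()
plt.show()
```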

Note: All of the below examples use the Automated-GuitarAmpModelling code, with the stateful LSTM implemented in NeuralPi and Chameleon. For SmartAmp/SmartAmpPro examples, see the “Tech Articles” referenced in the Product Downloads section of the GuitarML website.

Little Big Muff (Distortion/Fuzz/Sustain Pedal)

Little Big Muff Distortion/Fuzz/Sustain Pedal

Audio from actual pedal:

Predicted audio from GuitarML capture:

Dumble ODS Clone direct output through load box (High Gain settings)

Dumble ODS Clone (from kit)

Audio from actual amp:

Predicted audio from GuitarML capture:

Xotic SP (Compressor Pedal)

Xotic SP Compressor Pedal

Audio from actual pedal:

Predicted audio from GuitarML capture:

Accuracy is measured by the “error-to-signal ratio” (ESR), a version of MSE (mean squared error) that is normalized by the energy of the target signal. A final loss below 0.05 can generally be considered a successful capture, but your ear should be the final judge of quality.
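In code, the core of the metric looks something like the sketch below (the published loss also applies a pre-emphasis filter before this calculation, which is omitted here for brevity):

```python
# Error-to-signal ratio: squared error normalized by the energy of the
# target, so quiet and loud recordings are weighted fairly.
import numpy as np

def error_to_signal(target, prediction):
    target = np.asarray(target, dtype=np.float64)
    prediction = np.asarray(prediction, dtype=np.float64)
    return np.sum((target - prediction) ** 2) / np.sum(target ** 2)

# A perfect capture scores 0.0; below 0.05 is generally a good capture.
```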

What are some limitations?

  1. Currently, only 44.1kHz input audio produces ideal sound through the plugins. The model is optimized at the samplerate used in the input/output recordings. Using 44.1kHz (instead of 48kHz or higher) improves training time and real-time performance, and maintains a minimum of CD-quality audio. You can set your interface to 48kHz, but the sound will be distorted. This could be fixed by implementing an internal samplerate converter in the plugin, which would convert any input samplerate to the ideal rate for the model (44.1kHz). This feature has not yet been implemented in GuitarML plugins.
  2. Currently, only single snapshots of a pedal/amp sound can be captured at a time. Where a real amp has multiple continuous settings (gain knob, EQ knobs, etc.), each capture can only reproduce the sound of one specific combination of settings (for example, gain at 7, bass at 5, treble at 6). Other techniques can be used to model a range of settings, but they require multiple recordings taken at specific intervals across each setting. Future work may attempt to incorporate these techniques.
  3. You can only capture “signal”-based effects, as opposed to “time”-based effects. Distortion, overdrive, and most compression can be captured because they have a more immediate effect on the guitar signal. Reverb, delay, chorus, and flanger can’t be captured because they modulate the signal over time. These effects should be added separately to your signal chain.
  4. The capture can only be as good as the recorded audio samples. This is typically not an issue for pedals or devices that you can record with a direct output. However, any noise introduced in the recorded signal will also be captured. For most amplifiers, you will need to use a microphone, which adds its own color to the sound. It also introduces speaker/cabinet dynamics, which may or may not be properly captured by the machine learning. These small differences shift the modelled sound away from the “true” sound of the amplifier. However, if the recordings are done properly, this effect is negligible for capturing the overall sound of the amp.

Note: You can modify the recorded sample prior to training to attempt to remove noise, change EQ, or add digital distortion/modulation.
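For example, a gentle high-pass filter can strip low-frequency hum from a recording before training; here is a minimal sketch using scipy and soundfile (the file names are placeholders):

```python
# Remove low-frequency hum from a recording before training (file names
# are placeholders). A gentle 40 Hz high-pass leaves the guitar signal
# essentially untouched.
import soundfile as sf
from scipy.signal import butter, filtfilt

data, sr = sf.read("output.wav")
b, a = butter(2, 40, btype="highpass", fs=sr)
filtered = filtfilt(b, a, data, axis=0)  # axis=0 filters along time
sf.write("output_filtered.wav", filtered, sr, subtype="FLOAT")
```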

Why aren’t all models compatible with every plugin?

The GuitarML plugins began as experiments in using machine learning to model amps/pedals. Several different neural net models were tested in order to find the best one. Even though every model file uses the JSON format, the structure of the data is different for each model type, so a file created for one type cannot be loaded by a plugin expecting another (a quick way to inspect a file is sketched after the list below).

Here is a list of the current plugins and their neural net model types:

  1. SmartAmp and SmartPedal: WaveNet
  2. SmartAmpPro: Combination of 1-D Convolution and stateless LSTM
  3. NeuralPi and Chameleon: stateful LSTM
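If you are unsure what a given .json file contains, you can peek inside it. The snippet below just prints the top-level layout; the exact keys are not shown here because they differ per model type:

```python
# Peek inside a model .json file to see which kind of network it holds.
# The top-level keys differ between WaveNet, stateless LSTM, and
# stateful LSTM models, so this works as a quick compatibility check.
import json

with open("my_capture.json") as f:  # placeholder file name
    model = json.load(f)

print(sorted(model.keys()))
```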

I conducted a trade study while developing the NeuralPi that is helpful for explaining the different strengths and weaknesses of each model:

Model trade study (higher score is better)

The stateful LSTM used in NeuralPi and Chameleon has the highest accuracy, as well as the best real-time performance. CPU usage is comparable to other commercial plugins using traditional modelling methods. Training time falls between WaveNet and stateless LSTM. Limited testing has shown that it can handle high distortion better than the other models, which is ideal for electric guitar. The stateful LSTM is the model that will be used by GuitarML for future development.

The other two models have their own merits. The training used in SmartAmpPro is extremely fast and can be completed on a CPU (instead of a GPU) in under 5 minutes. It can model clean sounds and mild distortion pedals accurately, but has difficulty with amplifiers and higher distortion. The sound can also have a harsh quality not present in the target recording.

WaveNet requires much more computation, with roughly four times the CPU usage of comparable guitar plugins. It handles clean and mild distortion well. If the computer running it is not fast enough, the audio buffers can’t finish processing in time, which results in “glitchy” sound with breaks in the audio. Because of the high number of parameters in the network, training time is the longest of the three model types used by GuitarML.

The training code throws an error when trying to read my sample input/output wav files.

The wav files have to be in a specific format before using them with any of the training codebases. These are the requirements:

  1. Must be 32-bit floating point (32FP).
  2. Must be 44.1kHz samplerate.
  3. Must be Mono (as opposed to stereo).
  4. Can’t have extra metadata, such as tempo information, which is sometimes added automatically by your DAW.

Note: Most DAWs have an export feature that allows you to format the exported wav file from your tracks to meet the above requirements.
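If your DAW can’t export in this format, a few lines of Python can do the conversion. This is a sketch assuming soundfile and scipy are installed (file names are placeholders); writing a fresh file also drops any DAW metadata:

```python
# Convert a wav file to mono, 44.1kHz, 32-bit float (file names are
# placeholders). Writing a fresh file also drops any DAW metadata.
import soundfile as sf
from scipy.signal import resample_poly

data, sr = sf.read("raw_take.wav")
if data.ndim > 1:
    data = data.mean(axis=1)               # stereo -> mono
if sr != 44100:
    data = resample_poly(data, 44100, sr)  # e.g. 48000 -> 44100
    sr = 44100
sf.write("train_ready.wav", data, sr, subtype="FLOAT")
```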

What is Google Colab and how can I use it to create GuitarML models?

Google Colab is a free Python environment in the cloud, where you can run code through a web browser. The environment includes both TensorFlow and PyTorch (the frameworks used to do the machine learning part), which eliminates the need to download and install all the dependencies on your local computer. It requires a Google account to use. After recording your input/output sample wav files, follow these steps (an illustrative first-cell sequence follows the list):

  1. Download the appropriate .ipynb notebook for Colab from the GuitarML GitHub repository. For training NeuralPi/Chameleon models (the most advanced GuitarML models), go here. From this GitHub page, right-click the “Raw” button and choose “Save link as…” to download the Colab script.
  2. Go to the Colab Website.
  3. Click “File” and “Upload Notebook”, and upload the Colab script downloaded from GitHub.
  4. Switch to the GPU runtime to train using GPUs (much faster than CPU runtime). This is done from the “Runtime” dropdown menu at the top of the webpage.
  5. Upload your two sample input/output wav files. You may need to run the “!git clone …” section of code first, in order to create the project directory. The wav files can then be uploaded to the project folder in the left hand menu.
  6. Run each block of code by clicking the small “play” icons for each section in the main view. Follow any commented instructions (green font) before running each block of code. This includes changing the wav file names referenced in the code to match what you uploaded, and naming your model.
  7. When you run the actual training, you should see an output of its progress.
  8. When training is complete, you can run the last block of code to create plots for comparing the target signal and the newly created model.
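As a rough illustration only (the repository URL and commands vary by notebook, so follow the commented instructions in the one you downloaded), the first cells tend to look something like this:

```
# Illustrative Colab cells (not copied from a specific notebook; the
# repository and training command differ per notebook).
!git clone https://github.com/GuitarML/Automated-GuitarAmpModelling
%cd Automated-GuitarAmpModelling

# Upload input.wav / output.wav to this folder via the left-hand menu,
# then run the training cell provided by the notebook.
```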

Here is an annotated screenshot for using the SmartAmpPro Colab script. (The PedalNet and Automated-GuitarAmpModelling Colab scripts will be slightly different.)

Example screenshot for using the SmartAmpPro Colab script

Plugin Specific Questions

Why isn’t the SmartAmpPro capture feature working for me?

The built-in capture feature currently only works on Windows, and then only if Python and TensorFlow are set up to run from the command line. The training script is kicked off from a system command within the plugin, which I have since learned is not a great way to handle things.

Until these kinks are worked out, it is recommended to train models with the Colab training script on the Google Colab website. This eliminates the need to install dependencies, and the resulting model can still be imported into the plugin as normal.

Why won’t my DAW recognize the SmartAmp and SmartAmpPro plugins on Mac?

SmartAmp and SmartAmpPro were developed and released prior to GuitarML joining the Apple Developer program. These plugins are not signed or notarized, and are therefore blocked by the system (you can choose to allow them in the system security settings).

There is also an issue with the AU releases where an invalid plugin code was used, and the DAW may not accept the AU for that reason. This has been fixed for NeuralPi and Chameleon.

Why does SmartAmp sound glitchy or crackly?

The underlying WaveNet model used in SmartAmp uses a high amount of CPU for DSP processing. If a buffer cannot be processed in time, there are gaps in the output audio, which sounds really bad. There are no plans to further optimize WaveNet, as the LSTM model already provides at least a 4x speed improvement along with better sound and faster training. There are also no current plans to port SmartAmp to the LSTM model, since WaveNet is still a novel approach to black-box modelling and may be of interest to some people.

A couple of things that help SmartAmp performance:

  1. Train models with fewer parameters (fewer layers/channels). This will reduce the accuracy, but the model will run faster in real time.
  2. Plug in your laptop and set the system performance to the highest setting.
  3. Set your DAW thread priority to the highest setting.
  4. Increase the buffer size in your audio device settings.

Thank you for reading! For questions not covered here, please email:

smartguitarml@gmail.com
