Training locally from the command line

The command line trainer is the full-featured option for training models with NAM.


Currently, you’ll want to clone the source repo to train from the command line.
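
For example, using git:

git clone https://github.com/sdatkinson/neural-amp-modeler.git
cd neural-amp-modeler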

Installation uses Anaconda for package management.

For computers with a CUDA-capable GPU (recommended):

conda env create -f environment_gpu.yml


You may need to modify the CUDA version if your GPU is older. Have a look at NVIDIA's documentation if you're not sure.
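
If the NVIDIA driver is installed, running nvidia-smi will report the newest CUDA version the driver supports in its header, which can help you pick:

nvidia-smi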

Otherwise, for a CPU-only install (training will be much slower):

conda env create -f environment_cpu.yml


If Anaconda spends a long time "Solving environment…", you can speed up creating the environment by using the experimental libmamba solver: add --experimental-solver=libmamba to the command.
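
For example, for the GPU environment:

conda env create -f environment_gpu.yml --experimental-solver=libmamba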

Then activate the environment you’ve created with

conda activate nam
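
To sanity-check the install (and, for GPU installs, that PyTorch can see your GPU), you can run:

python -c "import torch; print(torch.cuda.is_available())"

This should print True for a working GPU install and False for a CPU-only one.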


Since the command-line trainer is intended for maximum flexibility, you can train from any input/output pair of reamp files you want. However, if you'd like to skip the reamping and use some pre-made files for your first time, you can download these files:

Next, edit bin/train/inputs/data/single_pair.json to point to the relevant audio files:

"common": {
    "x_path": "C:\\path\\to\\v1_1_1.wav",
    "y_path": "C:\\path\\to\\output.wav",
    "delay": 0


If you're providing your own audio files, then you need to provide the latency (in samples) between the input and output files. A positive value means that the output lags the input by that many samples; a negative value means that the output precedes the input (e.g. because your DAW over-compensated). If you're not sure exactly how much latency there is, it's usually a good idea to add a few extra samples so that the model doesn't need to predict the future!
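
If you'd rather measure than guess, one way to estimate the latency is to cross-correlate the two recordings. This isn't part of NAM; it's a minimal sketch that assumes mono WAV files at the same sample rate, using NumPy and SciPy:

# Illustrative sketch only (not part of NAM): estimate the delay between
# the input and output files from the peak of their cross-correlation.
# Assumes mono WAVs at the same sample rate.
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

rate_x, x = wavfile.read("v1_1_1.wav")  # input (DI) signal
rate_y, y = wavfile.read("output.wav")  # reamped output signal
assert rate_x == rate_y, "Sample rates must match"

n = min(len(x), len(y))
x = x[:n].astype(np.float64)
y = y[:n].astype(np.float64)

# Peak index of the full cross-correlation, shifted so that a positive
# value means the output lags the input.
delay = int(np.argmax(correlate(y, x, mode="full")) - (n - 1))
print(f"Estimated delay: {delay} samples")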

Next, to train, open up a terminal. Activate your nam environment and call the training with

python bin/train/main.py \
bin/train/inputs/data/single_pair.json \
bin/train/inputs/models/demonet.json \
bin/train/inputs/learning/demo.json \
bin/train/outputs/MyAmp

  • data/single_pair.json contains the information about the data you’re training on.

  • models/demonet.json contains information about the model architecture being trained. The example here uses a "feather"-configured WaveNet.

  • learning/demo.json contains information about the training run itself (e.g. number of epochs).

  • The final argument (bin/train/outputs/MyAmp here) is the directory where the training outputs will be written.

The configuration above runs a short (demo) training. For a real training, you may prefer to run something like:

python bin/train/main.py \
bin/train/inputs/data/single_pair.json \
bin/train/inputs/models/wavenet.json \
bin/train/inputs/learning/default.json \
bin/train/outputs/MyAmp


NAM uses PyTorch Lightning under the hood as its training framework, and you can control many of PyTorch Lightning's configuration options from bin/train/inputs/learning/default.json.
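
As an illustration (the actual schema is defined by the example files shipped with the repo, so check your copy of default.json; the trainer block shown here is an assumption), standard pytorch_lightning.Trainer options such as max_epochs might be set like this:

"trainer": {
    "max_epochs": 100
}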

Once training is done, a file called model.nam is created in the output directory. To use it, point the plugin at the file and you’re good to go!