nipoppy process

Note

This command internally uses the nipoppy.workflows.runner.PipelineRunner class from the Python API.

Run a processing pipeline.

nipoppy process [OPTIONS] [DATASET_ARGUMENT]
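
For example, to run a pipeline on all participants and sessions in a dataset (the pipeline name below is illustrative; use a pipeline defined in your config file):

nipoppy process --dataset /path/to/dataset --pipeline fmriprep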

Options

--dataset <dpath_root>

Path to the root of the dataset (default: current working directory).

--pipeline <pipeline_name>

Required. Pipeline name, as specified in the config file.

--pipeline-version <pipeline_version>

Pipeline version, as specified in the pipeline config file (default: the latest installed version).

--pipeline-step <pipeline_step>

Pipeline step, as specified in the pipeline config file (default: first step).
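
For example, to run a specific version and step of a pipeline (the version and step names below are illustrative; use the values defined in your pipeline config files):

nipoppy process --pipeline fmriprep --pipeline-version 24.1.1 --pipeline-step prepare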

--participant-id <participant_id>

Participant ID (with or without the sub- prefix).

--session-id <session_id>

Session ID (with or without the ses- prefix).
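
For example, to process a single participant and session (IDs are illustrative):

nipoppy process --pipeline fmriprep --participant-id 01 --session-id 1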

--simulate

Simulate the pipeline run without executing the generated command-line.
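
For example, to preview the command-line that would be generated for each participant and session without running it:

nipoppy process --pipeline fmriprep --simulate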

--keep-workdir

Keep the pipeline working directory upon success (default: the working directory is deleted unless a run failed).

--hpc <hpc>

Submit HPC jobs instead of running the pipeline directly. The value should be the HPC cluster type. Currently, ‘slurm’ and ‘sge’ have built-in support, but it is possible to add other cluster types supported by PySQA (https://pysqa.readthedocs.io/).
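
For example, to submit jobs to a Slurm cluster instead of running the pipeline locally:

nipoppy process --pipeline fmriprep --hpc slurm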

--write-list <write_list>

Path to a participant-session TSV file to be written. If this is provided, the pipeline will not be run: instead, a list of participant and session IDs will be written to this file.
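
For example, to write the participant and session IDs that would be processed to a file (the file path is illustrative) without running the pipeline:

nipoppy process --pipeline fmriprep --write-list to_process.tsv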

--tar

Archive participant-session-level results into a tarball upon successful completion. The path to be archived should be specified in the tracker configuration file.

-n, --dry-run

Print commands but do not execute them.

-v, --verbose

Verbose mode (show DEBUG messages).

--layout <fpath_layout>

Path to a custom layout specification file, to be used instead of the default layout.

Arguments

DATASET_ARGUMENT

Optional argument. Path to the root of the dataset (alternative to the --dataset option).