Some GPUs include a GPU System Processor (GSP) which can be used to offload GPU initialization and management tasks. This processor is driven by firmware files distributed with the driver. The GSP firmware is used by default on GPUs which support it.
Offloading tasks that were traditionally performed by the driver on the CPU can improve performance, because the GSP has lower-latency access to GPU hardware internals.
Firmware files are built into nvidia_gsp_*_fw.ko and installed in /boot/modules. Each GSP firmware module is named after a GPU architecture (for example, nvidia_gsp_tu10x_fw.ko is named after Turing) and supports GPUs from one or more architectures.
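For example, the firmware modules present on a system can be listed directly, and the kldstat utility shows whether one is currently loaded (a quick check, assuming the default installation path shown above):

$ ls /boot/modules/nvidia_gsp_*_fw.ko
$ kldstat | grep nvidia_gsp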
The nvidia-smi utility can be used to query the current use of GSP firmware. It will display a valid version if GSP firmware is enabled, or “N/A” if disabled:
$ nvidia-smi -q
...
    GSP Firmware Version : 560.35.03
...
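To extract just this field, for example from a script, the query output can be filtered (a simple sketch; the exact spacing of the label may vary between driver versions):

$ nvidia-smi -q | grep "GSP Firmware Version"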
This information is also available through a per-GPU sysctl variable:
$ sysctl hw.nvidia.gpus.0.firmware
hw.nvidia.gpus.0.firmware: 560.35.03
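On a system with multiple GPUs, the firmware status of every GPU can be inspected at once by filtering the hw.nvidia.gpus sysctl tree (a sketch assuming each GPU exposes a per-GPU node named as above):

$ sysctl hw.nvidia.gpus | grep firmware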
The GSP firmware is used by default on all Turing and later GPUs. The driver can be explicitly configured to use the GSP firmware by setting the sysctl variable hw.nvidia.registry.EnableGpuFirmware=1, or forced not to use it by setting hw.nvidia.registry.EnableGpuFirmware=0.
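For example, to force GSP firmware off, the registry variable can be set as root, and the same line can be added to /boot/loader.conf so that it is applied when the driver is loaded at boot (a sketch; the loader.conf approach is an assumption, since the registry key is presumably read when the driver initializes):

# sysctl hw.nvidia.registry.EnableGpuFirmware=0
# echo 'hw.nvidia.registry.EnableGpuFirmware=0' >> /boot/loader.conf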