Appendix L. Programming Modes

The NVIDIA Accelerated FreeBSD Driver Set supports all standard VGA and VESA modes, as well as most user-written custom mode lines; double-scan modes are supported on all hardware. Interlaced modes are supported on all GeForce FX/Quadro FX and newer GPUs, and certain older GPUs; the X log file will contain a message "Interlaced video modes are supported on this GPU" if interlaced modes are supported.

In general, your display device (monitor/flat panel/television) will be a greater constraint on what modes you can use than either your NVIDIA GPU-based video board or the NVIDIA Accelerated FreeBSD Driver Set.

To request one or more standard modes for use in X, you can simply add a "Modes" line such as:

    Modes "1600x1200" "1024x768" "640x480"

in the appropriate Display subsection of your X config file (please see the XF86Config(5x) or xorg.conf(5x) man pages for details). The following documentation is primarily of interest if you compose your own custom mode lines, or are just interested in learning more. Please note that this is neither an explanation of, nor a guide to, the fine art of crafting custom mode lines for X; we leave that to documents such as the XFree86 Video Timings HOWTO.

Depth, Bits per Pixel, and Pitch

While not directly a concern when programming modes, the number of bits used per pixel is an issue when considering the maximum programmable resolution; for this reason, it is worth addressing the confusion surrounding the terms "depth" and "bits per pixel". Depth is how many bits of data are stored per pixel. Supported depths are 8, 15, 16, and 24. Most video hardware, however, stores pixel data in sizes of 8, 16, or 32 bits; this is the amount of memory allocated per pixel. When you specify your depth, X selects the bits per pixel (bpp) size in which to store the data. Below is a table of the bpp used for each possible depth:

    Depth   BPP
    8       8
    15      16
    16      16
    24      32

Lastly, the "pitch" is the number of bytes in the linear frame buffer between one pixel's data and the data of the pixel immediately below it. You can think of this as the horizontal resolution multiplied by the bytes per pixel (bits per pixel divided by 8). In practice, the pitch may be larger than this product due to alignment constraints.
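The depth-to-bpp mapping and the pitch computation above can be sketched as follows; note that the 64-byte alignment used here is purely illustrative, as the real alignment requirement is hardware-specific:

```python
# Map each supported depth to the bits-per-pixel size X uses to store it,
# per the table above.
DEPTH_TO_BPP = {8: 8, 15: 16, 16: 16, 24: 32}

def pitch_bytes(h_res, depth, align=64):
    """Bytes between vertically adjacent pixels, rounded up to an
    alignment boundary (64 here is only an illustrative value)."""
    bpp = DEPTH_TO_BPP[depth]
    raw = h_res * bpp // 8          # horizontal resolution * bytes per pixel
    return (raw + align - 1) // align * align

print(pitch_bytes(1600, 24))  # 1600 * 4 bytes = 6400, already aligned
print(pitch_bytes(1366, 24))  # 5464 rounded up to the next 64-byte boundary
```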

Maximum Resolutions

The NVIDIA Accelerated FreeBSD Driver Set and NVIDIA GPU-based video boards support resolutions up to 2048x1536, though the maximum resolution your system can support is also limited by the amount of video memory (see USEFUL FORMULAS for details) and the maximum supported resolution of your display device (monitor/flat panel/television). Also note that while use of a video overlay does not limit the maximum resolution or refresh rate, the video memory bandwidth used by a programmed mode does affect the overlay quality.

Useful Formulas

The maximum resolution is a function both of the amount of video memory and the bits per pixel you elect to use:

HR * VR * (bpp/8) = Video Memory Used

In other words, the amount of video memory used is equal to the horizontal resolution (HR) multiplied by the vertical resolution (VR) multiplied by the bytes per pixel (bits per pixel divided by eight). Technically, the video memory used is actually the pitch times the vertical resolution, and the pitch may be slightly greater than (HR * (bpp/8)) to accommodate the hardware requirement that the pitch be a multiple of some value.

Please note that this is just memory usage for the frame buffer; video memory is also used by other things, such as OpenGL and pixmap caching.
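The memory formula above can be checked numerically; this sketch assumes the simple case where the pitch equals HR * (bpp/8), with no padding:

```python
def framebuffer_bytes(hr, vr, bpp):
    """Video memory consumed by the frame buffer alone:
    HR * VR * (bpp / 8), ignoring any pitch padding."""
    return hr * vr * bpp // 8

# A 1600x1200 mode at depth 24 (stored as 32 bpp):
mb = framebuffer_bytes(1600, 1200, 32) / (1024 * 1024)
print(f"{mb:.1f} MB")  # roughly 7.3 MB for the frame buffer alone
```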

Another important relationship is that between the resolution, the pixel clock (aka dot clock) and the vertical refresh rate:

RR = PCLK / (HFL * VFL)
In other words, the refresh rate (RR) is equal to the pixel clock (PCLK) divided by the total number of pixels: the horizontal frame length (HFL) multiplied by the vertical frame length (VFL) (note that these are the frame lengths, and not just the visible resolutions). As described in the XFree86 Video Timings HOWTO, the above formula can be rewritten as:

PCLK = RR * HFL * VFL
Given a maximum pixel clock, you can adjust the RR, HFL, and VFL as desired, as long as the product of the three does not exceed it. The pixel clock is reported in the log file when you run X with verbose logging: startx -- -logverbose 5. Your X log should contain several lines like:

    (--) NVIDIA(0): Display Device 0: maximum pixel clock at  8 bpp: 350 MHz
    (--) NVIDIA(0): Display Device 0: maximum pixel clock at 16 bpp: 350 MHz
    (--) NVIDIA(0): Display Device 0: maximum pixel clock at 32 bpp: 300 MHz

which indicate the maximum pixel clock at each bits-per-pixel size.
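The relationship between refresh rate, pixel clock, and frame lengths can be sketched as a quick validity check; the 2160x1250 total frame lengths and the 300 MHz limit below are only illustrative values (the limit is taken from the 32 bpp example log line above):

```python
def refresh_rate_hz(pclk_mhz, hfl, vfl):
    """RR = PCLK / (HFL * VFL); note these are total frame lengths,
    not the visible resolutions."""
    return pclk_mhz * 1_000_000 / (hfl * vfl)

def required_pclk_mhz(rr_hz, hfl, vfl):
    """Rearranged: PCLK = RR * HFL * VFL."""
    return rr_hz * hfl * vfl / 1_000_000

# A 1600x1200 mode with hypothetical total frame lengths of 2160x1250,
# requested at 85 Hz:
pclk = required_pclk_mhz(85, 2160, 1250)
print(f"needs {pclk:.1f} MHz")  # needs 229.5 MHz
print(pclk <= 300)              # True: within the example 32 bpp limit
```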

How Modes Are Validated

During the PreInit phase of the X server, the NVIDIA X driver validates all requested modes by doing the following:

The last three steps are also performed when each mode is programmed, to catch potentially invalid modes submitted by the XF86VidModeExtension (e.g., xvidtune(1)). For TwinView, the above validation is done for the modes requested for each display device.

Additional Mode Constraints

Below is a list of additional constraints on a mode's parameters that must be met. In some cases these are chip-specific.

The following table provides the maximum DAC values for various hardware generations:

          GeForce2 and 3    GeForce4 and newer
    HR    4092              8192
    HBW   1016              2040
    HSS   4088              8224
    HSW   256               512
    HFL   4128              8224
    VR    4096              8192
    VBW   128               256
    VSS   4095              8192
    VSW   16                16
    VFL   4097              8192
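A minimal sketch of checking a mode's parameters against the "GeForce4 and newer" column of the table; the limits are taken from the table, but the check itself and the sample mode values are only illustrative:

```python
# Maximum values from the "GeForce4 and newer" column above.
LIMITS = {"HR": 8192, "HBW": 2040, "HSS": 8224, "HSW": 512, "HFL": 8224,
          "VR": 8192, "VBW": 256, "VSS": 8192, "VSW": 16, "VFL": 8192}

def over_limit(mode):
    """Return the names of any parameters exceeding the table's maximums."""
    return [name for name, value in mode.items()
            if name in LIMITS and value > LIMITS[name]]

# A hypothetical 2048x1536 mode (timing values chosen for illustration):
mode = {"HR": 2048, "HSS": 2074, "HSW": 32, "HFL": 2200,
        "VR": 1536, "VSS": 1537, "VSW": 3, "VFL": 1568}
print(over_limit(mode))  # [] -- every parameter is within range
```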

Here is an example mode line illustrating how these timing parameters appear in practice:

    # Custom Mode line for the SGI 1600SW Flat Panel
    #        name           PCLK  HR   HSS  HSE  HFL  VR   VSS  VSE  VFL
    Modeline "sgi1600x1024" 106.9 1600 1632 1656 1672 1024 1027 1030 1067
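Applying the refresh-rate formula to this modeline (106.9 MHz pixel clock, HFL of 1672, VFL of 1067) recovers the panel's refresh rate:

```python
# Fields taken from the sgi1600x1024 modeline above.
pclk_mhz, hfl, vfl = 106.9, 1672, 1067

rr = pclk_mhz * 1_000_000 / (hfl * vfl)   # RR = PCLK / (HFL * VFL)
print(f"{rr:.1f} Hz")  # about 59.9 Hz
```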

Ensuring Identical Mode Timings

Some functionality, such as Active Stereo with TwinView, requires control over exactly which mode timings are used. There are several ways to accomplish that:

Additional Information

An XFree86 modeline generator conforming to the GTF standard is available online. Additional generators can be found by searching the web for "modeline".