Configuration

Wolf is configured via a TOML config file plus a few optional ENV variables.

ENV variables

WOLF_LOG_LEVEL
  Default: INFO
  The log level to show; one of ERROR, WARNING, INFO, DEBUG, TRACE.

WOLF_CFG_FILE
  Default: /etc/wolf/cfg/config.toml
  Full path to the config file.

WOLF_PRIVATE_KEY_FILE
  Default: /etc/wolf/cfg/key.pem
  Full path to the key.pem file.

WOLF_PRIVATE_CERT_FILE
  Default: /etc/wolf/cfg/cert.pem
  Full path to the cert.pem file.

XDG_RUNTIME_DIR
  Default: /tmp/sockets
  The full path where the PulseAudio and video sockets will reside; this path is expected to be present on the host and mounted into the Wolf container.

WOLF_PULSE_IMAGE
  Default: ghcr.io/games-on-whales/pulseaudio:master
  The name of the PulseAudio image to be started (when no connection is available).

WOLF_STOP_CONTAINER_ON_EXIT
  Default: TRUE
  Set to FALSE to avoid force-stopping and removing containers when the connection is closed.

WOLF_DOCKER_SOCKET
  Default: /var/run/docker.sock
  The full path to the Docker socket; doesn't support TCP (yet).

NVIDIA_DRIVER_VOLUME_NAME
  Default: nvidia-driver-vol
  The name of the externally created Docker volume that holds the Nvidia drivers.

HOST_APPS_STATE_FOLDER
  Default: /etc/wolf
  The base folder on the host where the running apps will store permanent state.

WOLF_RENDER_NODE
  Default: /dev/dri/renderD128
  The default render node used for virtual desktops; see: Multiple GPUs.

WOLF_DOCKER_FAKE_UDEV_PATH
  Default: $HOST_APPS_STATE_FOLDER/fake-udev
  The path on the host for the fake-udev CLI tool.
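
For example, a minimal sketch of overriding some of these variables when starting Wolf with Docker (the flags shown here are illustrative, not a complete launch command; refer to the quickstart for the full set of mounts and devices):

docker run \
  -e WOLF_LOG_LEVEL=DEBUG \
  -e XDG_RUNTIME_DIR=/tmp/sockets \
  -v /etc/wolf:/etc/wolf:rw \
  -v /tmp/sockets:/tmp/sockets:rw \
  -v /var/run/docker.sock:/var/run/docker.sock:rw \
  ghcr.io/games-on-whales/wolf:stable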

Additional env variables useful when debugging:

RUST_BACKTRACE
  Default: full
  In case an exception is thrown in the Rust code, this sets the backtrace level.

GST_DEBUG
  Default: 2
  Gstreamer debug output level; see: debugging-tools.
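
When reproducing an issue you can crank these up alongside Wolf's own log level; a sketch (the values shown are illustrative):

docker run \
  -e WOLF_LOG_LEVEL=TRACE \
  -e RUST_BACKTRACE=full \
  -e GST_DEBUG=4 \
  ghcr.io/games-on-whales/wolf:stable  # plus the usual mounts from the quickstart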

TOML file

The TOML configuration file is read once at startup (or created if not present); it will be modified by Wolf only when a new user is successfully paired.

Default TOML configuration file
hostname = "wolf"   (1)
support_hevc = true (2)
config_version = 2 (3)
uuid = "0f75f4d1-e28e-410a-b318-c0579f18f8d1" (4)

paired_clients = [] (5)
apps = [] (6)
gstreamer = {} (7)
1 hostname: this is the name that will be displayed in the list of hosts in the Moonlight UI
2 support_hevc: when set to false, disables HEVC support in Moonlight
3 config_version: the version of this config file
4 uuid: a randomly generated UUID, used by Moonlight to know if the current host has already been paired
5 paired_clients: a list of all the Moonlight clients that have successfully completed the pairing process; it'll be populated by Wolf and saved to this file
6 apps: a list of apps, see: Defining apps
7 gstreamer: audio/video pipeline definitions, see: Gstreamer

Defining apps

Apps defined here will be shown in Moonlight after successfully pairing with Wolf.
You can easily re-define parts of the Gstreamer pipeline, for example:

[[apps]]
title = "Test ball" (1)
start_virtual_compositor = false (2)

[apps.runner] (3)
type = "process"
run_cmd = "sh -c \"while :; do echo 'running...'; sleep 10; done\""

[apps.video] (4)
source = """
videotestsrc pattern=ball flip=true is-live=true !
video/x-raw, framerate={fps}/1
\
"""

[apps.audio] (5)
source = "audiotestsrc wave=ticks is-live=true"
1 title: this is the name that will be displayed in Moonlight
2 start_virtual_compositor: set to true if this app needs our custom virtual compositor (TODO: document this better)
3 runner: the type of process to run in order to start this app, see: App Runner
4 video: here it’s possible to override the default video pipeline variables defined in: Gstreamer
5 audio: here it’s possible to override the default audio pipeline variables defined in: Gstreamer

See more examples in the Gstreamer page.

App Runner

Two types of runner are currently supported: process and docker.

Process

Example:

[apps.runner]
type = "process"
run_cmd = "sh -c \"while :; do echo 'running...'; sleep 10; done\""

Docker

Example:

[apps.runner]
type = "docker"
name = "WolfSteam"
image = "ghcr.io/games-on-whales/steam:edge"
mounts = [
  "/run/udev:/run/udev:ro"
]
env = [
  "PROTON_LOG=1",
  "RUN_GAMESCOPE=true",
  "ENABLE_VKBASALT=1"
]
devices = []
ports = []
base_create_json = """ (1)
{
  "HostConfig": {
    "IpcMode": "host",
    "CapAdd": ["SYS_ADMIN", "SYS_NICE"],
    "Privileged": false
  }
}
\
"""
1 base_create_json: here you can re-define any property that's defined in the Docker API JSON format, see: docs.docker.com/engine/api/v1.40

Gstreamer

Here we define the default pipelines for both video and audio streaming to Moonlight.
In order to automatically pick the right encoder at runtime based on the user's HW, Wolf runs through the list of encoders at gstreamer.video.hevc_encoders (and gstreamer.video.h264_encoders) in order; the first set of plugins that can be correctly initialised by Gstreamer will be the selected encoder for all the pipelines.
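
As a rough sketch of the shape of these entries (the field names mirror the default config shipped with Wolf; the pipelines are trimmed placeholders, not the real defaults):

[[gstreamer.video.hevc_encoders]]
plugin_name = "nvcodec"                        # probed first
check_elements = ["nvh265enc"]                 # must initialise for this entry to win
encoder_pipeline = "nvh265enc ! video/x-h265"  # trimmed for brevity

[[gstreamer.video.hevc_encoders]]
plugin_name = "qsv"                            # probed only if nvcodec failed
check_elements = ["qsvh265enc"]
encoder_pipeline = "qsvh265enc ! video/x-h265" # trimmed for brevity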

You can read more about gstreamer and custom pipelines in the Gstreamer page.

Multiple GPUs

When you have multiple GPUs installed in your host, you might want to have better control over which one is used by Wolf and how.
There are two main separate parts that make use of HW acceleration in Wolf:

  • Gstreamer video encoding: this will use HW acceleration in order to efficiently encode the video stream with H.264 or HEVC.

  • App render node: this will use HW acceleration in order to create virtual Wayland desktops and run the chosen app (ex: Firefox, Steam, …).

They can be configured separately, and ideally you could even use two GPUs at the same time for different jobs; a common setup would be to use the integrated GPU just for the streaming part and use a powerful GPU to play apps/games.

Gstreamer video encoding

The streaming video encoding pipeline is fully controlled by the config.toml file; the order in which entries are listed is important because Wolf will just try each listed plugin, and the first one that works is the one that will be used.

If you have an Intel iGPU and an Nvidia card in the same host, and you would like to use QuickSync to do the encoding, you can either:

  • Delete the nvcodec entries under gstreamer.video.hevc_encoders

  • Cut the qsv entry and paste it above the nvcodec entry (see the sketch below)
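
For the second option, a trimmed sketch of the resulting order (field names follow the default config; pipelines omitted):

[[gstreamer.video.hevc_encoders]]
plugin_name = "qsv"        # now first: QuickSync does the encoding
check_elements = ["qsvh265enc"]

[[gstreamer.video.hevc_encoders]]
plugin_name = "nvcodec"    # never reached while qsv initialises fine
check_elements = ["nvh265enc"]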

On top of that, each apps entry supports overriding the default streaming pipeline; for example:

[[apps]]
title = "Test ball"

# More options here, removed for brevity...

[apps.video]
source = """
videotestsrc pattern=ball flip=true is-live=true !
video/x-raw, framerate={fps}/1
\
"""

In case you have two GPUs that use the same encoder pipeline (example: an AMD iGPU and a discrete AMD GPU) you can override the encoder_pipeline with the corresponding device-specific encoder plugin; see: gstreamer/issues/1167.
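
For instance, a hypothetical per-app override, assuming encoder_pipeline can be overridden under [apps.video] the same way source is, and assuming your Gstreamer va plugin exposes a device-specific element such as varenderD129h265enc for the second render node (run gst-inspect-1.0 va on your host to see the actual element names; encoder properties are omitted here):

[apps.video]
encoder_pipeline = """
varenderD129h265enc ! video/x-h265
"""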

App render node

Each application that Wolf starts will have access only to a specific render node, even if the host has multiple GPUs connected.
By default, Wolf will use the env variable WOLF_RENDER_NODE, which defaults to /dev/dri/renderD128.
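
For example, a sketch of pointing virtual desktops at the second GPU (illustrative only, not a complete launch command):

docker run \
  -e WOLF_RENDER_NODE=/dev/dri/renderD129 \
  --device /dev/dri \
  ghcr.io/games-on-whales/wolf:stable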

If you don’t know which render node is associated with which GPU, you can use the following command:

ls -l /sys/class/drm/renderD*/device/driver
/sys/class/drm/renderD128/device/driver -> ../../../../bus/virtio/drivers/virtio_gpu (1)
/sys/class/drm/renderD129/device/driver -> ../../../../bus/pci/drivers/nvidia (2)
1 This line tells you that renderD128 is a virtual GPU
2 This line tells you that renderD129 is an Nvidia GPU

Wolf also supports overriding the render node for each app defined in the config.toml config file by setting the render_node property; example:

[apps.runner]
type = "docker"
name = "WolfSteam"
image = "ghcr.io/games-on-whales/steam:edge"

# More options here, removed for brevity...
render_node = "/dev/dri/renderD129"