Configuration

Wolf is configured via a TOML config file and a few optional ENV variables.

ENV variables

WOLF_LOG_LEVEL (default: INFO)
  The log level to show; one of ERROR, WARNING, INFO, DEBUG, TRACE.

WOLF_CFG_FILE (default: /etc/wolf/cfg/config.toml)
  Full path to the config file.

WOLF_PRIVATE_KEY_FILE (default: /etc/wolf/cfg/key.pem)
  Full path to the key.pem file.

WOLF_PRIVATE_CERT_FILE (default: /etc/wolf/cfg/cert.pem)
  Full path to the cert.pem file.

XDG_RUNTIME_DIR (default: /tmp/sockets)
  The full path where the PulseAudio and video sockets will reside; this path must exist on the host and be mounted into the Wolf container.

WOLF_PULSE_IMAGE (default: ghcr.io/games-on-whales/pulseaudio:master)
  The name of the PulseAudio image to start (when no connection is available).

WOLF_STOP_CONTAINER_ON_EXIT (default: TRUE)
  Set to FALSE to avoid the forced stop and removal of containers when the connection is closed.

WOLF_DOCKER_SOCKET (default: /var/run/docker.sock)
  The full path to the Docker socket; TCP is not supported (yet).

NVIDIA_DRIVER_VOLUME_NAME (default: nvidia-driver-vol)
  The name of the externally created Docker volume that holds the Nvidia drivers.

HOST_APPS_STATE_FOLDER (default: /etc/wolf)
  The base folder on the host where the running apps will store permanent state.

WOLF_RENDER_NODE (default: /dev/dri/renderD128)
  The default render node used for virtual desktops; see: Multiple GPUs.

WOLF_ENCODER_NODE (default: $WOLF_RENDER_NODE)
  The default render node used for the Gstreamer pipelines; see: Multiple GPUs.

WOLF_DOCKER_FAKE_UDEV_PATH (default: $HOST_APPS_STATE_FOLDER/fake-udev)
  The path on the host for the fake-udev CLI tool.
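For example, a docker run invocation that overrides a few of these variables could look like the following sketch; the image name is illustrative and the flags are trimmed (this is not the full set of flags Wolf needs to run, check the installation docs for the complete invocation):

docker run -d --name wolf \
  -e WOLF_LOG_LEVEL=DEBUG \
  -e XDG_RUNTIME_DIR=/tmp/sockets \
  -v /tmp/sockets:/tmp/sockets:rw \
  -v /etc/wolf:/etc/wolf:rw \
  -v /var/run/docker.sock:/var/run/docker.sock:rw \
  ghcr.io/games-on-whales/wolf:stable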

Additional env variables useful when debugging:

RUST_BACKTRACE (default: full)
  If an exception is thrown in the Rust code, this sets the backtrace level.

GST_DEBUG (default: 2)
  Gstreamer debug print level; see: debugging-tools.
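When debugging, these can be combined with the variables above by adding flags to the docker run command; the values shown here are just an example:

-e WOLF_LOG_LEVEL=TRACE -e GST_DEBUG=4 -e RUST_BACKTRACE=full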

Where will apps store permanent data?

Since Wolf supports multiple streaming sessions at the same time and each session can run a different app, we need to make sure that each app has its own folder where it can store permanent data.
To achieve this, for each running app, Wolf will create a folder structure like this:

${HOST_APPS_STATE_FOLDER}/${app_state_folder}/${app_title}

and then mount that as the home (/home/retro) for the docker container that will run the selected app.

These three variables are defined as follows:

  • HOST_APPS_STATE_FOLDER: defaults to /etc/wolf; it can be changed via ENV

  • app_state_folder: defaults to a unique identifier for each client, so that every Moonlight session has its own folder; it can be changed in the config.toml file

  • app_title: the title of the app as defined in the config.toml file
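As a concrete example, with all the defaults in place, a client whose app_state_folder is <client-id> (a placeholder for the unique identifier Wolf generates for that client) running an app titled Steam would get:

/etc/wolf/<client-id>/Steam

on the host, mounted as /home/retro inside the container.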

Share home folder with multiple clients

This will break isolation; if you want to connect with multiple clients at the same time, you should not share the home folder. You can follow the development of that feature here

By default, Wolf will create a new home folder for each client, but if you want to share the same home folder with multiple clients, you can set the app_state_folder to the same value for each paired client; example:

[[paired_clients]]
app_state_folder = "common"               # <-- !!!

[[paired_clients]]
app_state_folder = "common"               # <-- !!!

This way, when running any app from those two clients, they will share the same home folder by mounting ${HOST_APPS_STATE_FOLDER}/common/${app_title} as /home/retro in the docker container.

Change the HOST_APPS_STATE_FOLDER

Changing this folder via the ENV variable alone is not enough; Wolf expects to be able to access files under this folder, so you need to make sure the new folder is created and mounted into the Wolf container.

For example, if you want to change HOST_APPS_STATE_FOLDER to /mnt/drive/wolf, you need to change the docker run command with the following (a full example follows the list):

  • -e HOST_APPS_STATE_FOLDER=/mnt/drive/wolf

  • -v /mnt/drive/wolf:/mnt/drive/wolf:rw
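Putting it together, the full command could look like this sketch (image name illustrative, other flags trimmed for brevity):

docker run -d --name wolf \
  -e HOST_APPS_STATE_FOLDER=/mnt/drive/wolf \
  -v /mnt/drive/wolf:/mnt/drive/wolf:rw \
  -v /var/run/docker.sock:/var/run/docker.sock:rw \
  ghcr.io/games-on-whales/wolf:stable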

TOML file

The TOML configuration file is read only once at startup (or created if not present); Wolf will modify it only when a new user is successfully paired.

Default TOML configuration file
hostname = "wolf"   (1)
support_hevc = true (2)
config_version = 2 (3)
uuid = "0f75f4d1-e28e-410a-b318-c0579f18f8d1" (4)

paired_clients = [] (5)
apps = [] (6)
gstreamer = {} (7)
1 hostname: this is the name that will be displayed in the list of hosts in the Moonlight UI
2 support_hevc: when set to false, disables HEVC support in Moonlight
3 config_version: the version of this config file
4 uuid: a randomly generated UUID, used by Moonlight to know if the current host has already been paired
5 paired_clients: a list of all the Moonlight clients that have successfully completed the pairing process; it'll be populated by Wolf and saved to this file
6 apps: a list of apps; see: Defining apps
7 gstreamer: audio/video pipeline definitions; see: Gstreamer

Defining apps

Apps defined here will be shown in Moonlight after successfully pairing with Wolf.
You can easily re-define parts of the Gstreamer pipeline; example:

[[apps]]
title = "Test ball" (1)
start_virtual_compositor = false (2)
app_state_folder = "some/folder" (3)

[apps.runner] (4)
type = "process"
run_cmd = "sh -c \"while :; do echo 'running...'; sleep 10; done\""

[apps.video] (5)
source = """
videotestsrc pattern=ball flip=true is-live=true !
video/x-raw, framerate={fps}/1
\
"""

[apps.audio] (6)
source = "audiotestsrc wave=ticks is-live=true"
1 title: this is the name that will be displayed in Moonlight
2 start_virtual_compositor: set to true if this app needs our custom virtual compositor (TODO: document this better)
3 app_state_folder: the folder where the app will store permanent data; see: Where will apps store permanent data?
4 runner: the type of process to run in order to start this app; see: App Runner
5 video: here it's possible to override the default video pipeline variables defined in: Gstreamer
6 audio: here it's possible to override the default audio pipeline variables defined in: Gstreamer

See more examples in the Gstreamer page.

Override the default joypad mapping

By default, Wolf will try to match the joypad type that Moonlight sends with the correct mapping. It is possible to override this behaviour by setting the joypad_type property in the apps entry; example:

[[apps]]
title = "Test ball"
joypad_type = "xbox" # Force the joypad to always be xbox

The available joypad types are:

  • auto (default)

  • xbox

  • nintendo

  • ps

App Runner

There are currently two types of runner supported: docker and process.

Process

Example:

[apps.runner]
type = "process"
run_cmd = "sh -c \"while :; do echo 'running...'; sleep 10; done\""

Docker

Example:

[apps.runner]
type = "docker"
name = "WolfSteam"
image = "ghcr.io/games-on-whales/steam:edge"
mounts = [
  "/run/udev:/run/udev:ro"
]
env = [
  "PROTON_LOG=1",
  "RUN_SWAY=true",
  "ENABLE_VKBASALT=1"
]
devices = []
ports = []
base_create_json = """ (1)
{
  "HostConfig": {
    "IpcMode": "host",
    "CapAdd": ["SYS_ADMIN", "SYS_NICE"],
    "Privileged": false
  }
}
\
"""
1 base_create_json: here you can re-define any property defined in the Docker API JSON format; see: docs.docker.com/engine/api/v1.40

Gstreamer

Here we define the default pipelines for both video and audio streaming to Moonlight.
To automatically pick the right encoder at runtime based on the user's HW, Wolf walks the list of encoders at gstreamer.video.hevc_encoders (and gstreamer.video.h264_encoders) in order; the first set of plugins that Gstreamer can correctly initialise will be the encoder used for all the pipelines.

You can read more about gstreamer and custom pipelines in the Gstreamer page.

Multiple GPUs

When you have multiple GPUs installed on your host, you might want better control over which one is used by Wolf and how.
There are two main separate parts of Wolf that make use of HW acceleration:

  • App render node: this will use HW acceleration in order to create virtual Wayland desktops and run the chosen app (ex: Firefox, Steam, …​). Use the WOLF_RENDER_NODE (defaults to /dev/dri/renderD128) env variable to control this.

  • Gstreamer video encoding: this will use HW acceleration in order to efficiently encode the video stream with H.264 or HEVC. Use the WOLF_ENCODER_NODE (defaults to WOLF_RENDER_NODE) env variable to control this.

They can be configured separately, and in theory you could even use two GPUs at the same time for different jobs; ex: use the integrated GPU just for the streaming part and use a powerful GPU to play apps/games.

This isn't recommended: it can introduce additional latency and it's not optimal. HW encoding on modern GPUs is very lightweight, so it's better to use the same GPU for both jobs.
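If you still want to try the split, the two variables just have to point at different render nodes; for example (the node paths are illustrative, see below for how to find yours):

-e WOLF_RENDER_NODE=/dev/dri/renderD129   # discrete GPU: runs the apps
-e WOLF_ENCODER_NODE=/dev/dri/renderD128  # iGPU: encodes the stream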

App render node

Each application started by Wolf will have access only to a specific render node, even if the host has multiple GPUs connected.
By default, Wolf will use the env variable WOLF_RENDER_NODE, which defaults to /dev/dri/renderD128.

If you don't know which render node is associated with which GPU, you can use the following command:

ls -l /sys/class/drm/renderD*/device/driver
/sys/class/drm/renderD128/device/driver -> ../../../../bus/virtio/drivers/virtio_gpu (1)
/sys/class/drm/renderD129/device/driver -> ../../../../bus/pci/drivers/nvidia (2)
1 This line will tell you that renderD128 is a virtual GPU
2 This line will tell you that renderD129 is a Nvidia GPU

Wolf also supports overriding the render node for each single app defined in the config.toml file by setting the render_node property; example:

[apps.runner]
type = "docker"
name = "WolfSteam"
image = "ghcr.io/games-on-whales/steam:edge"

# More options here, removed for brevity...
render_node = "/dev/dri/renderD129"

Gstreamer video encoding

The easy way to control this is to edit the env variable WOLF_ENCODER_NODE (it defaults to match WOLF_RENDER_NODE, so that the same GPU is used for both); this sets the default render node used for the Gstreamer pipelines.

The streaming video encoding pipeline is fully controlled by the config.toml file; the order in which entries are listed is important because Wolf will try each listed plugin in turn, and the first one that works is the one that will be used.

If you have an Intel iGPU and an Nvidia card on the same host, and you would like to use QuickSync for the encoding, you can either (a sketch of the reordered result follows this list):

  • Delete the nvcodec entries under gstreamer.video.hevc_encoders

  • Cut the qsv entry and paste it above the nvcodec entry
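As a rough sketch, the reordered list could look like the following; the field names below follow the default config.toml shipped with Wolf at the time of writing, so double-check them against your own file before editing:

[[gstreamer.video.hevc_encoders]]
plugin_name = "qsv"              # listed first: QuickSync is tried first
check_elements = ["qsvh265enc"]
# keep the rest of the original qsv entry (encoder_pipeline, etc.) unchanged

[[gstreamer.video.hevc_encoders]]
plugin_name = "nvcodec"          # only reached if the qsv plugin fails to initialise
check_elements = ["nvh265enc"]
# ...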

On top of that, each single apps entry supports overriding the default streaming pipeline; for example:

[[apps]]
title = "Test ball"

# More options here, removed for brevity...

[apps.video]
source = """
videotestsrc pattern=ball flip=true is-live=true !
video/x-raw, framerate={fps}/1
\
"""

In case you have two GPUs that use the same encoder pipeline (for example: an AMD iGPU and a discrete AMD card), you can override the video_params with the corresponding encoder plugin; see: gstreamer/issues/1167.
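A minimal sketch of such an override, assuming a VA-API based encoder; the exact contents of video_params are encoder-specific, so start from the matching entry under gstreamer.video.*_encoders in your config.toml rather than from this illustration:

[apps.video]
# Illustrative only: copy the video_params of the encoder entry you want to
# pin and adjust it to target the desired card
video_params = """
vapostproc !
video/x-raw(memory:VAMemory), format=NV12
"""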

Directly launch a Steam game

This has been moved to the steam.adoc page.