Quickstart
Wolf runs as a single container; it spins additional containers up and down on demand.
Docker
The instructions below are split by platform; pick the one that matches your setup:
- Intel/AMD
- Nvidia (Container Toolkit)
- Nvidia (Manual)
- WSL2
- Proxmox LXC
Intel/AMD
Docker CLI:
docker run \
--name wolf \
--network=host \
-e XDG_RUNTIME_DIR=/tmp/sockets \
-v /tmp/sockets:/tmp/sockets:rw \
-e HOST_APPS_STATE_FOLDER=/etc/wolf \
-v /etc/wolf:/etc/wolf:rw \
-v /var/run/docker.sock:/var/run/docker.sock:rw \
--device /dev/dri/ \
--device /dev/uinput \
--device /dev/uhid \
-v /dev/:/dev/:rw \
-v /run/udev:/run/udev:rw \
--device-cgroup-rule "c 13:* rmw" \
ghcr.io/games-on-whales/wolf:stable
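Before starting the container, you can quickly check that the devices Wolf needs are actually present on the host (the exact card/render node names vary per system):
ls -l /dev/dri/ /dev/uinput /dev/uhid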
Docker compose:
version: "3"
services:
wolf:
image: ghcr.io/games-on-whales/wolf:stable
environment:
- XDG_RUNTIME_DIR=/tmp/sockets
- HOST_APPS_STATE_FOLDER=/etc/wolf
volumes:
- /etc/wolf/:/etc/wolf
- /tmp/sockets:/tmp/sockets:rw
- /var/run/docker.sock:/var/run/docker.sock:rw
- /dev/:/dev/:rw
- /run/udev:/run/udev:rw
device_cgroup_rules:
- 'c 13:* rmw'
devices:
- /dev/dri
- /dev/uinput
- /dev/uhid
network_mode: host
restart: unless-stopped
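With the compose file saved (assuming the usual docker-compose.yml name in the current directory), Wolf can then be started in the background with:
docker compose up -d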
Nvidia (Container Toolkit)
Make sure that the version of the Nvidia Container Toolkit installed on the host is up to date.
Docker CLI:
docker run \
--name wolf \
--network=host \
-e XDG_RUNTIME_DIR=/tmp/sockets \
-v /tmp/sockets:/tmp/sockets:rw \
-e HOST_APPS_STATE_FOLDER=/etc/wolf \
-v /etc/wolf:/etc/wolf:rw \
-v /var/run/docker.sock:/var/run/docker.sock:rw \
-e NVIDIA_DRIVER_CAPABILITIES=all \
-e NVIDIA_VISIBLE_DEVICES=all \
--gpus=all \
--device /dev/dri/ \
--device /dev/uinput \
--device /dev/uhid \
-v /dev/:/dev/:rw \
-v /run/udev:/run/udev:rw \
--device-cgroup-rule "c 13:* rmw" \
ghcr.io/games-on-whales/wolf:stable
Docker compose:
version: "3"
services:
wolf:
image: ghcr.io/games-on-whales/wolf:stable
environment:
- XDG_RUNTIME_DIR=/tmp/sockets
- HOST_APPS_STATE_FOLDER=/etc/wolf
- NVIDIA_DRIVER_CAPABILITIES=all
- NVIDIA_VISIBLE_DEVICES=all
volumes:
- /etc/wolf/:/etc/wolf
- /tmp/sockets:/tmp/sockets:rw
- /var/run/docker.sock:/var/run/docker.sock:rw
- /dev/:/dev/:rw
- /run/udev:/run/udev:rw
device_cgroup_rules:
- 'c 13:* rmw'
devices:
- /dev/dri
- /dev/uinput
- /dev/uhid
runtime: nvidia
deploy:
resources:
reservations:
devices:
- capabilities: [gpu]
network_mode: host
restart: unless-stopped
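If Docker complains that the nvidia runtime is unknown, the Container Toolkit may still need to be registered with Docker; this is the standard setup step shipped with the toolkit itself:
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker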
One final check: make sure that the nvidia-drm module is loaded and that it was loaded with the flag modeset=1:
sudo cat /sys/module/nvidia_drm/parameters/modeset
Y
What if I get N, or the file is not present? How do I set the flag?
If using GRUB, the easiest way to make the change persistent is to add nvidia-drm.modeset=1 to the GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub, for example:
GRUB_CMDLINE_LINUX_DEFAULT="quiet nvidia-drm.modeset=1"
Then run sudo update-grub and reboot.
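As an alternative to editing the kernel command line, the flag can usually also be set via a modprobe option; a sketch (the file name is arbitrary, and the initramfs rebuild command assumes a Debian/Ubuntu-style system):
# Set nvidia-drm.modeset=1 whenever the module is loaded
echo "options nvidia-drm modeset=1" | sudo tee /etc/modprobe.d/nvidia-drm-modeset.conf
# Rebuild the initramfs so the option is picked up at early boot, then reboot
sudo update-initramfs -u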
Nvidia (Manual)
Unfortunately, on Nvidia things are a little more complex.
Make sure that your driver version is >= 530.30.02.
First, let’s build an additional docker image that will contain the Nvidia driver files:
curl https://raw.githubusercontent.com/games-on-whales/gow/master/images/nvidia-driver/Dockerfile | docker build -t gow/nvidia-driver:latest -f - --build-arg NV_VERSION=$(cat /sys/module/nvidia/version) .
This will create gow/nvidia-driver:latest locally.
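You can confirm that the image is available with:
docker image ls gow/nvidia-driver:latest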
Unfortunately, Docker doesn't seem to support directly mounting images, but you can pre-populate volumes by running:
docker create --rm --mount source=nvidia-driver-vol,destination=/usr/nvidia gow/nvidia-driver:latest sh
This will create a Docker container, populate nvidia-driver-vol with the Nvidia driver files (if it wasn't already populated), and remove the container.
Check that the volume exists with:
docker volume ls | grep nvidia-driver
local nvidia-driver-vol
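If you want to double check what ended up inside the volume, you can list it from a throwaway container (alpine is just an arbitrary small image used here for the check):
docker run --rm -v nvidia-driver-vol:/usr/nvidia:ro alpine ls /usr/nvidia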
One final check: make sure that the nvidia-drm module is loaded and that it was loaded with the flag modeset=1:
sudo cat /sys/module/nvidia_drm/parameters/modeset
Y
What if I get N, or the file is not present? How do I set the flag?
If using GRUB, the easiest way to make the change persistent is to add nvidia-drm.modeset=1 to the GRUB_CMDLINE_LINUX_DEFAULT line in /etc/default/grub, for example:
GRUB_CMDLINE_LINUX_DEFAULT="quiet nvidia-drm.modeset=1"
Then run sudo update-grub and reboot.
For more options or details, see ArchWiki: Kernel parameters.
You can now finally start the container; Docker CLI:
docker run \
--name wolf \
--network=host \
-e XDG_RUNTIME_DIR=/tmp/sockets \
-v /tmp/sockets:/tmp/sockets:rw \
-e NVIDIA_DRIVER_VOLUME_NAME=nvidia-driver-vol \
-v nvidia-driver-vol:/usr/nvidia:rw \
-e HOST_APPS_STATE_FOLDER=/etc/wolf \
-v /etc/wolf:/etc/wolf:rw \
-v /var/run/docker.sock:/var/run/docker.sock:rw \
--device /dev/nvidia-uvm \
--device /dev/nvidia-uvm-tools \
--device /dev/dri/ \
--device /dev/nvidia-caps/nvidia-cap1 \
--device /dev/nvidia-caps/nvidia-cap2 \
--device /dev/nvidiactl \
--device /dev/nvidia0 \
--device /dev/nvidia-modeset \
--device /dev/uinput \
--device /dev/uhid \
-v /dev/:/dev/:rw \
-v /run/udev:/run/udev:rw \
--device-cgroup-rule "c 13:* rmw" \
ghcr.io/games-on-whales/wolf:stable
Docker compose:
version: "3"
services:
wolf:
image: ghcr.io/games-on-whales/wolf:stable
environment:
- XDG_RUNTIME_DIR=/tmp/sockets
- NVIDIA_DRIVER_VOLUME_NAME=nvidia-driver-vol
- HOST_APPS_STATE_FOLDER=/etc/wolf
volumes:
- /etc/wolf/:/etc/wolf:rw
- /tmp/sockets:/tmp/sockets:rw
- /var/run/docker.sock:/var/run/docker.sock:rw
- /dev/:/dev/:rw
- /run/udev:/run/udev:rw
- nvidia-driver-vol:/usr/nvidia:rw
devices:
- /dev/dri
- /dev/uinput
- /dev/uhid
- /dev/nvidia-uvm
- /dev/nvidia-uvm-tools
- /dev/nvidia-caps/nvidia-cap1
- /dev/nvidia-caps/nvidia-cap2
- /dev/nvidiactl
- /dev/nvidia0
- /dev/nvidia-modeset
device_cgroup_rules:
- 'c 13:* rmw'
network_mode: host
restart: unless-stopped
volumes:
nvidia-driver-vol:
external: true
If you are missing any of the /dev/nvidia* devices, you might also need to initialise them using:
sudo nvidia-container-cli --load-kmods info
Or if that fails:
#!/bin/bash
## Script to initialize nvidia device nodes.
## https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#runfile-verifications
/sbin/modprobe nvidia
if [ "$?" -eq 0 ]; then
  # Count the number of NVIDIA controllers found.
  NVDEVS=`lspci | grep -i NVIDIA`
  N3D=`echo "$NVDEVS" | grep "3D controller" | wc -l`
  NVGA=`echo "$NVDEVS" | grep "VGA compatible controller" | wc -l`
  N=`expr $N3D + $NVGA - 1`
  for i in `seq 0 $N`; do
    mknod -m 666 /dev/nvidia$i c 195 $i
  done
  mknod -m 666 /dev/nvidiactl c 195 255
else
  exit 1
fi
/sbin/modprobe nvidia-uvm
if [ "$?" -eq 0 ]; then
  # Find out the major device number used by the nvidia-uvm driver
  D=`grep nvidia-uvm /proc/devices | awk '{print $1}'`
  mknod -m 666 /dev/nvidia-uvm c $D 0
  mknod -m 666 /dev/nvidia-uvm-tools c $D 0
else
  exit 1
fi
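If you go down this route, save the script to a file and run it as root; a sketch (the nvidia-device-nodes.sh name is just an example):
chmod +x nvidia-device-nodes.sh
sudo ./nvidia-device-nodes.sh
ls -l /dev/nvidia*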
I am still not able to see all the Nvidia devices
You may need to set up your host to automatically load the Nvidia GPU kernel modules at boot time.
First, create a new file nvidia.conf in the /etc/modules-load.d/ directory and open it with a text editor:
sudo nano /etc/modules-load.d/nvidia.conf
Paste the following content into the file:
nvidia
nvidia_uvm
For the changes to take effect, update the initramfs with the following command:
sudo update-initramfs -u
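The modules will then be loaded automatically at boot; to load them right away without rebooting you can also run:
sudo modprobe nvidia
sudo modprobe nvidia_uvm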
Then add udev rules to create the missing Nvidia devices:
sudo nano /etc/udev/rules.d/70-nvidia.rules
Paste the following content into the file:
# create necessary NVIDIA device files in /dev/*
KERNEL=="nvidia", RUN+="/bin/bash -c '/usr/bin/nvidia-smi -L && /bin/chmod 0666 /dev/nvidia*'"
KERNEL=="nvidia_uvm", RUN+="/bin/bash -c '/usr/bin/nvidia-modprobe -c0 -u && /bin/chmod 0666 /dev/nvidia-uvm*'"
Reboot your host and try running ls -l /dev/nvidia* again.
WSL2
Running Wolf in WSL2 hasn't been properly tested.
You can run Wolf in a very unprivileged setting without uinput/uhid; unfortunately, this means you'll be restricted to using only mouse and keyboard.
For Nvidia users, follow the Nvidia instructions above. This should work for AMD/Intel users.
docker run \
--name wolf \
--network=host \
-e XDG_RUNTIME_DIR=/tmp/sockets \
-v /tmp/sockets:/tmp/sockets:rw \
-e HOST_APPS_STATE_FOLDER=/etc/wolf \
-v /etc/wolf:/etc/wolf:rw \
-v /var/run/docker.sock:/var/run/docker.sock:rw \
--device /dev/dri/ \
ghcr.io/games-on-whales/wolf:stable
Proxmox LXC
At the moment it is only possible to run Wolf inside a privileged LXC.
First you need to make sure your GPU drivers are installed and loaded on your PVE host.
Also make sure to add the virtual devices udev rules to the PVE host as explained in the Virtual devices support section.
Now we need to edit the LXC config file to pass through the GPU.
I don't have an LXC yet
The easiest way to create an LXC to run Wolf is to use tteck's PVE Docker script, which will create an LXC with Docker already set up. (Make sure you choose a privileged LXC and install docker-compose when prompted.)
Edit the LXC config file (nano /etc/pve/lxc/1XX.conf); it should look similar to this:
arch: amd64
cores: 8
features: nesting=1
hostname: wolf
memory: 8192
net0: name=eth0,bridge=vmbr0,hwaddr=BC:24:11:B7:90:5D,ip=dhcp,type=veth
onboot: 1
ostype: debian
rootfs: local-lvm:vm-118-disk-0,size=128G
swap: 512
tags: proxmox-helper-scripts
lxc.cgroup2.devices.allow: a
lxc.cap.drop:
lxc.cgroup2.devices.allow: c 188:* rwm
lxc.cgroup2.devices.allow: c 189:* rwm
lxc.mount.entry: /dev/serial/by-id dev/serial/by-id none bind,optional,create=dir
lxc.mount.entry: /dev/ttyUSB0 dev/ttyUSB0 none bind,optional,create=file
lxc.mount.entry: /dev/ttyUSB1 dev/ttyUSB1 none bind,optional,create=file
lxc.mount.entry: /dev/ttyACM0 dev/ttyACM0 none bind,optional,create=file
lxc.mount.entry: /dev/ttyACM1 dev/ttyACM1 none bind,optional,create=file
Remove the following lines from the bottom; they are not needed:
lxc.cgroup2.devices.allow: c 188:* rwm
lxc.cgroup2.devices.allow: c 189:* rwm
lxc.mount.entry: /dev/serial/by-id dev/serial/by-id none bind,optional,create=dir
lxc.mount.entry: /dev/ttyUSB0 dev/ttyUSB0 none bind,optional,create=file
lxc.mount.entry: /dev/ttyUSB1 dev/ttyUSB1 none bind,optional,create=file
lxc.mount.entry: /dev/ttyACM0 dev/ttyACM0 none bind,optional,create=file
lxc.mount.entry: /dev/ttyACM1 dev/ttyACM1 none bind,optional,create=file
Add these lines to the bottom of the file:
For Nvidia
dev0: /dev/uinput
dev1: /dev/uhid
dev2: /dev/nvidia0
dev3: /dev/nvidiactl
dev4: /dev/nvidia-modeset
dev5: /dev/nvidia-uvm
dev6: /dev/nvidia-uvm-tools
dev7: /dev/nvidia-caps/nvidia-cap1
dev8: /dev/nvidia-caps/nvidia-cap2
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/input dev/input none bind,optional,create=dir
lxc.mount.entry: /run/udev mnt/udev none bind,optional,create=dir
For Intel/AMD
dev0: /dev/uinput
dev1: /dev/uhid
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/input dev/input none bind,optional,create=dir
lxc.mount.entry: /run/udev mnt/udev none bind,optional,create=dir
Save the file, exit, and restart your LXC.
Your LXC is now good to go; complete the installation for your GPU by following the relevant section above.
When creating your docker-compose.yml, don't forget to modify the volume mapping for /run/udev, which is mounted at /mnt/udev inside the LXC:
volumes:
  - /mnt/udev:/run/udev:rw
And if you have a multi-GPU setup, don't forget to set the environment variable below:
environment:
  - WOLF_RENDER_NODE=/dev/dri/renderD12X
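To figure out which render node belongs to which GPU, the by-path symlinks are usually the quickest way to check (available on most distributions):
ls -l /dev/dri/by-path/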
Which ports are used by Wolf?
To keep things simple, the examples above default to host networking (--network=host / network_mode: host); that's not strictly required. The minimum set of ports that need to be exposed is:
# HTTPS
EXPOSE 47984/tcp
# HTTP
EXPOSE 47989/tcp
# Control
EXPOSE 47999/udp
# RTSP
EXPOSE 48010/tcp
# Video (up to 10 users, you can open more ports if needed)
EXPOSE 48100-48110/udp
# Audio (up to 10 users, you can open more ports if needed)
EXPOSE 48200-48210/udp
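If you'd rather publish only these ports instead of using host networking, the equivalent docker run flags would look roughly like this (a sketch based on the defaults above; drop --network=host from the command in that case):
-p 47984:47984/tcp -p 47989:47989/tcp -p 47999:47999/udp \
-p 48010:48010/tcp -p 48100-48110:48100-48110/udp -p 48200-48210:48200-48210/udp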
Moonlight pairing
You should now be able to point Moonlight to the IP address of the server and start the pairing process:
- In Moonlight, you'll get a prompt for a PIN
- Wolf will log a line with a link to a page where you can input that PIN (ex: http://localhost:47989/pin/#337327E8A6FC0C66; make sure to replace localhost with your server IP). You can grep for it as shown right after this list.
- In Moonlight, you should now be able to see a list of the applications that are supported by Wolf
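To find that link quickly, you can grep the Wolf logs (assuming the container is named wolf as in the commands above):
docker logs wolf 2>&1 | grep -i pin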
If you only see a black screen with a cursor in Moonlight, it's because the first time you start an app Wolf has to download the corresponding Docker image and run first-time updates.
Virtual devices support
We use uinput to create virtual devices (mouse, keyboard, and joypad); make sure that /dev/uinput is present on the host:
ls -la /dev/uinput
crw------- 1 root root 10, 223 Jan 17 09:08 /dev/uinput
Your user also needs to be part of the input group:
sudo usermod -a -G input $USER
Then create the following udev rules under /etc/udev/rules.d/85-wolf-virtual-inputs.rules:
# Allows Wolf to access /dev/uinput
KERNEL=="uinput", SUBSYSTEM=="misc", MODE="0660", GROUP="input", OPTIONS+="static_node=uinput"
# Allows Wolf to access /dev/uhid
KERNEL=="uhid", TAG+="uaccess"
# Move virtual keyboard and mouse into a different seat
SUBSYSTEMS=="input", ATTRS{id/vendor}=="ab00", MODE="0660", GROUP="input", ENV{ID_SEAT}="seat9"
# Joypads
SUBSYSTEMS=="input", ATTRS{name}=="Wolf X-Box One (virtual) pad", MODE="0660", GROUP="input"
SUBSYSTEMS=="input", ATTRS{name}=="Wolf PS5 (virtual) pad", MODE="0660", GROUP="input"
SUBSYSTEMS=="input", ATTRS{name}=="Wolf gamepad (virtual) motion sensors", MODE="0660", GROUP="input"
SUBSYSTEMS=="input", ATTRS{name}=="Wolf Nintendo (virtual) pad", MODE="0660", GROUP="input"
What does that mean?
KERNEL=="uinput", SUBSYSTEM=="misc", MODE="0660", GROUP="input", OPTIONS+="static_node=uinput"
Allows Wolf to access /dev/uinput on your system. It needs that node to create the virtual devices. This is usually not the default on servers, but if that is already working for you on your desktop system, you can skip this line.
SUBSYSTEMS=="input", ATTRS{id/vendor}=="ab00", MODE="0660", GROUP="input", ENV{ID_SEAT}="seat9"
This line checks for the custom vendor-id that Wolf gives to newly created virtual devices and assigns them to seat9, which will cause any session with a lower seat (usually you only have seat1 for your main session) to ignore the devices.
SUBSYSTEMS=="input", ATTRS{name}=="Wolf X-Box One (virtual) pad", MODE="0660", GROUP="input" SUBSYSTEMS=="input", ATTRS{name}=="Wolf PS5 (virtual) pad", MODE="0660", GROUP="input" SUBSYSTEMS=="input", ATTRS{name}=="Wolf gamepad (virtual) motion sensors", MODE="0660", GROUP="input" SUBSYSTEMS=="input", ATTRS{name}=="Wolf Nintendo (virtual) pad", MODE="0660", GROUP="input"
The virtual controllers are different: to be picked up correctly they have to emulate an existing brand, so they carry a vendor/product ID resembling a real controller; the assigned name, however, is specific to Wolf.
You can't assign controllers to a seat (well, you can, but it won't have the same effect), so we just give them permissions so that only the user and group can pick them up.
Reload the udev rules either by rebooting or by running:
udevadm control --reload-rules && udevadm trigger