Latest revision as of 17:03, 20 April 2023
Mounts
Mounting a directory available to the host machine
In this example, DrivePool is a CIFS share on another machine, already mounted on the host. The container we are adding it to is 230. The first path is the location on the host machine; the second is where you want it mounted inside the container.
The container should be shut down before running this command.
pct set 230 -mp0 /mnt/pve/Drivepool,mp=/Drivepool
OR add to the config file located in
/etc/pve/lxc/
mp0: /mnt/pve/DrivePool,mp=/Drivepool
If your shared directory is present on all nodes add
,shared=1
Like this:
mp0: /mnt/pve/DrivePool,mp=/Drivepool,shared=1
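Additional shares can be attached to the same container by incrementing the index (mp1, mp2, and so on). A sketch, with the second path purely illustrative:

```
mp0: /mnt/pve/DrivePool,mp=/Drivepool,shared=1
mp1: /mnt/pve/OtherShare,mp=/OtherShare,shared=1
```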
With the container stopped, go to Options and edit the features. Enable CIFS and nesting.
Start the container.
We're going to create some configuration files
mkdir /etc/autofs
nano /etc/autofs/DrivePool.conf
DrivePool -fstype=cifs,vers=3.0,rw,relatime,sec=ntlmssp,cache=strict,credentials=/etc/autofs/DrivePool.creds,uid=1001,forceuid,gid=111,forcegid,file_mode=0777,dir_mode=0777,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1 ://IPADDRESS/DrivePool
nano /etc/autofs/DrivePool.creds
domain=YOURDOMAIN
username=YOURUSERNAME
password=YOURPASSWORD
Install AutoFS
apt install -y cifs-utils autofs
Add Our Configuration
nano /etc/auto.master
/FS1 /etc/autofs/DrivePool.conf --ghost --timeout 0 --verbose
Restart The Service
service autofs restart
Mount directories from other containers
Method 1: AutoFS + sshfs
With the container stopped, go to Options and edit the features. Enable FUSE and nesting.
Start the container.
We're going to create some configuration files
mkdir /etc/autofs
nano /etc/autofs/plex1.conf
bar -fstype=fuse,rw,nodev,nonempty,noatime,allow_other,max_read=65536 :sshfs\#sysop@10.0.12.90:/your/remote/path
apt-get install sshfs autofs
Now we setup passwordless auth to the other container
ssh-keygen -t rsa
ssh-copy-id -i .ssh/id_rsa.pub sysop@10.0.12.90
It will prompt you for the password.
We'll create the mount directory
mkdir /mnt/sshfs
And add our configuration to autofs
nano /etc/auto.master
/mnt/sshfs /etc/autofs/plex1.conf --timeout=30 --ghost
Restart The Service
service autofs restart
Method 2: just sshfs
With the container stopped, go to Options and edit the features. Enable FUSE and nesting.
apt-get install sshfs
nano /home/sshfsmounts.sh
echo "yourpass" | sshfs youruser@10.0.15.31:/path/on/remote /mnt/pathyouwant -o password_stdin
crontab -e
@reboot /bin/bash /home/sshfsmounts.sh
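The script above stores the password in plaintext. A sketch of a key-based variant (assuming you have run ssh-copy-id as in Method 1; the remote, target path, and key location are the example values and would need adjusting):

```shell
#!/bin/bash
# Hypothetical key-based replacement for /home/sshfsmounts.sh: once
# ssh-copy-id has installed your public key on the remote, no plaintext
# password needs to live in the script.
mount_sshfs() {
  remote="$1" target="$2"
  # Re-running is harmless: skip if something is already mounted there,
  # so the @reboot cron entry can also be invoked by hand.
  if mountpoint -q "$target"; then
    echo "already mounted: $target"
    return 0
  fi
  mkdir -p "$target"
  sshfs "$remote" "$target" -o IdentityFile=/root/.ssh/id_rsa,reconnect
}

# Example call (the same remote/path as the example above):
# mount_sshfs "youruser@10.0.15.31:/path/on/remote" /mnt/pathyouwant
```

The same `@reboot` crontab entry works unchanged, pointing at this script instead.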
Passthrough Physical Disk to Container
Device Passthrough
GPU Passthrough
Install the pve-headers package before and after you do any driver installs on the host, as below:
apt install pve-headers
apt install -y pve-headers-$(uname -r)
NOTES
This sometimes needs to be done again after running updates on the host and rebooting.
Nvidia
On Host
apt install -y build-essential gcc-multilib dkms
Blacklist Nouveau Drivers
nano /etc/modprobe.d/blacklist-nouveau.conf
blacklist nouveau
options nouveau modeset=0
update-initramfs -u
reboot
Download Nvidia Driver for your card
Manual
Download the driver for your card from https://www.nvidia.com/Download/index.aspx then:
chmod +x NVIDIA-Linux-x86_64-460.84.run
./NVIDIA-Linux-x86_64-460.84.run --dkms -s
Apt
nano /etc/apt/sources.list
Add non-free to the end of your main repo line, then:
apt update
apt install nvidia-driver
Create Kernel rules
nano /etc/udev/rules.d/70-nvidia.rules
KERNEL=="nvidia_modeset", RUN+="/bin/bash -c '/usr/bin/nvidia-modprobe -c0 -m && /bin/chmod 666 /dev/nvidia-modeset*'"
# Create /nvidia0, /dev/nvidia1 … and /nvidiactl when nvidia module is loaded
KERNEL=="nvidia", RUN+="/bin/bash -c '/usr/bin/nvidia-smi -L && /bin/chmod 666 /dev/nvidia*'"
# Create the CUDA node when nvidia_uvm CUDA module is loaded
KERNEL=="nvidia_uvm", RUN+="/bin/bash -c '/usr/bin/nvidia-modprobe -c0 -u && /bin/chmod 0666 /dev/nvidia-uvm*'"
nano /etc/modules-load.d/modules.conf
Add these lines
# Nvidia modules
nvidia
nvidia-modeset
nvidia_uvm
Update initramfs
update-initramfs -u
Do a reboot
reboot
Find the GPU device number.
This usually is 226:0,128
ls -l /dev/dri
crw-rw---- 1 root video  226,   0 Feb 11 18:11 card0
crw-rw---- 1 root render 226, 128 Feb 11 18:11 renderD128
AND
ls -al /dev/nvidia*
crw-rw-rw- 1 root root 195,   0 Feb 11 18:11 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 Feb 11 18:11 /dev/nvidiactl
crw-rw-rw- 1 root root 195, 254 Feb 11 18:11 /dev/nvidia-modeset
crw-rw-rw- 1 root root 236,   0 Feb 11 18:11 /dev/nvidia-uvm
crw-rw-rw- 1 root root 236,   1 Feb 11 18:11 /dev/nvidia-uvm-tools
Take note of the major numbers in the fifth column above: 195 and 236 for the Nvidia devices, and 226 for the /dev/dri devices.
ADD these lines to /etc/pve/lxc/<container-id>.conf
lxc.cgroup2.devices.allow: c 195:0 rw
lxc.cgroup2.devices.allow: c 195:255 rw
lxc.cgroup2.devices.allow: c 195:254 rw
lxc.cgroup2.devices.allow: c 236:0 rw
lxc.cgroup2.devices.allow: c 236:1 rw
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
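As an optional helper (my own sketch, not part of the Proxmox docs), the cgroup allow lines can be generated from the actual device list instead of copying the major/minor numbers by hand. Feed it `ls -l /dev/nvidia*` on your host:

```shell
# Turn "ls -l" character-device listings into lxc.cgroup2.devices.allow lines.
devices_to_allow() {
  # ls -l prints a char device's major and minor numbers in columns 5 and 6,
  # e.g. "195, 0"; strip the trailing comma and emit one rule per device.
  awk '{ gsub(",", "", $5); printf "lxc.cgroup2.devices.allow: c %s:%s rw\n", $5, $6 }'
}

# Demo on two lines from the sample listing above:
devices_to_allow <<'EOF'
crw-rw-rw- 1 root root 195, 0 Feb 11 18:11 /dev/nvidia0
crw-rw-rw- 1 root root 236, 0 Feb 11 18:11 /dev/nvidia-uvm
EOF
```

On a real host: `ls -l /dev/nvidia* | devices_to_allow >> /etc/pve/lxc/<container-id>.conf` (review the output before appending).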
Set up the Nvidia persistenced service
To prevent the driver/kernel module from being unloaded whenever the GPU is idle, run the Nvidia-provided persistence service. It is made available after the driver install.
Copy and extract
cp /usr/share/doc/NVIDIA_GLX-1.0/samples/nvidia-persistenced-init.tar.bz2 .
bunzip2 nvidia-persistenced-init.tar.bz2
tar -xf nvidia-persistenced-init.tar
Remove old, if any (to avoid masked service)
rm /etc/systemd/system/nvidia-persistenced.service
Install
chmod +x nvidia-persistenced-init/install.sh
./nvidia-persistenced-init/install.sh
Check that it’s ok
systemctl status nvidia-persistenced.service
rm -rf nvidia-persistenced-init*
In Container
Copy the drivers over from the host somehow.
chmod +x NVIDIA-Linux-x86_64-460.84.run
./NVIDIA-Linux-x86_64-460.84.run --no-kernel-module
AMD Radeon
On Host
Download AMD Driver for your card
Extract file, and navigate into the extracted contents.
./amdgpu-pro-install --no-32 --opencl=legacy,rocr -y
apt install amf-amdgpu-pro
Find the GPU device number. This usually is 226:0,128
ls -l /dev/dri
ADD these lines to /etc/pve/lxc/<container-id>.conf
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
In Container
Copy the drivers over from the host somehow.
./amdgpu-install --opencl=legacy --headless --no-dkms --no-32 -y
apt install amf-amdgpu-pro
Intel Quick Sync
I've not personally tested this, but:
"All Intel CPUs since Sandy Bridge (released in 2011) have hardware acceleration for H.264 built in. So if your CPU supports Quick Sync, you can speed up transcoding and reduce load as well as energy consumption."
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
Net Passthrough
This is important if your container needs to be able to create a TUN device, which is useful for applications like OpenVPN.
ADD these lines to /etc/pve/lxc/<container-id>.conf
lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net dev/net none bind,create=dir
DVB Passthrough (TV Tuner Cards)
This is useful for applications like tvheadend, nextpvr, plex, emby, jellyfin.
ADD these lines to /etc/pve/lxc/<container-id>.conf
lxc.cgroup2.devices.allow: c 212:* rwm
lxc.mount.entry: /dev/dvb dev/dvb none bind,optional,create=dir
USB Passthrough
Find the applicable USB major and minor numbers via:
#: lsusb
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 013: ID 045e:0800 Microsoft Corp.
Bus 001 Device 020: ID 04a9:1746 Canon, Inc.
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
ADD these lines to /etc/pve/lxc/<container-id>.conf
lxc.cgroup2.devices.allow: c 189:* rwm
lxc.mount.entry: /dev/bus/usb/001/020 dev/bus/usb/001/020 none bind,optional,create=file
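As an optional sketch (my own helper, not from the Proxmox docs), the mount entry can be built straight from an lsusb line, e.g. `lsusb | grep Canon` for the camera above:

```shell
# Turn an "lsusb" line into the matching lxc.mount.entry for the device node.
usb_mount_entry() {
  # lsusb prints "Bus 001 Device 020: ..."; the bus is field 2 and the device
  # is field 4 with a trailing colon to strip.
  awk '{ sub(":", "", $4);
         printf "lxc.mount.entry: /dev/bus/usb/%s/%s dev/bus/usb/%s/%s none bind,optional,create=file\n",
                $2, $4, $2, $4 }'
}

# Demo on the Canon camera from the listing above:
echo "Bus 001 Device 020: ID 04a9:1746 Canon, Inc." | usb_mount_entry
```

Note the device number changes when the device is replugged, so the entry may need regenerating.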
Application Tricks for running in LXC
Docker
Host
nano /etc/modules-load.d/modules.conf
overlay
modprobe overlay
With the container stopped, go to Options and edit the features. Enable nesting and keyctl.
Container
Install
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
Run NTP server in LXC
ADD this line to /etc/pve/lxc/<container-id>.conf (it redefines the capability drop list so that sys_time is no longer dropped, letting the container set the clock):
lxc.cap.drop = sys_module mac_admin mac_override
Run avahi in LXC
mkdir -p /etc/systemd/system/avahi-daemon.service.d
cat <<EOF > /etc/systemd/system/avahi-daemon.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/sbin/avahi-daemon -s --no-rlimits
EOF
systemctl daemon-reload
systemctl start avahi-daemon
systemctl status avahi-daemon
Running Ubuntu Snaps in LXC
Install squashfuse in the container
apt install squashfuse fuse
then install snapd
apt install snapd
LXC apache2 NAMESPACE fix
sed -i -e 's,PrivateTmp=true,PrivateTmp=false\nNoNewPrivileges=yes,g' /lib/systemd/system/apache2.service
systemctl daemon-reload
systemctl start apache2.service
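To see what the sed above does without touching the real unit file, here is a dry run on a sample line (this assumes GNU sed, which expands \n in the replacement into a newline):

```shell
# Dry run of the substitution used above, on a sample line rather than the
# real apache2.service file: PrivateTmp is turned off and NoNewPrivileges
# is added on the following line.
echo "PrivateTmp=true" | sed -e 's,PrivateTmp=true,PrivateTmp=false\nNoNewPrivileges=yes,g'
```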
LXC elasticsearch
nano /etc/elasticsearch/jvm.options.d/heap.options
-Xms4g
-Xmx4g
systemctl restart elasticsearch.service
Forward ALSA audio to LXC container
Host Machine
apt-get update
apt-get install dkms
apt-get install libasound2 alsa-utils alsa-oss
root@pve:~# arecord -l
**** List of CAPTURE Hardware Devices ****
card 0: PCH [HDA Intel PCH], device 0: ALC283 Analog [ALC283 Analog]
  Subdevices: 0/1
  Subdevice #0: subdevice #0
card 1: CameraB409241 [USB Camera-B4.09.24.1], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
root@pve:~# ls -la /dev/snd
total 0
drwxr-xr-x  4 root root   360 Jul 10 23:26 .
drwxr-xr-x 24 root root  4300 Jul 10 23:26 ..
drwxr-xr-x  2 root root    60 Jul 10 23:26 by-id
drwxr-xr-x  2 root root    80 Jul 10 23:26 by-path
crw-rw---- 1 root audio 116,  2 Jul 11 08:50 controlC0
crw-rw---- 1 root audio 116, 12 Jul 11 08:50 controlC1
...
ADD these lines to /etc/pve/lxc/<container-id>.conf (on hosts using cgroup v2, the key is lxc.cgroup2.devices.allow, as in the sections above):
lxc.cgroup.devices.allow = c 116:* rwm
lxc.mount.entry = /dev/snd dev/snd none bind,create=dir 0 0
Remove apparmor from a container
There may be times that apparmor prevents a container from running properly (This is rare). If you trust the container, you can edit the configuration file to disable apparmor:
Configs are located in `/etc/pve/lxc`
add to the bottom of the config:
lxc.apparmor.profile: unconfined
Reduce the size of a container
Containers can always be easily increased in size via the web panel. Reducing their size is a slightly more complicated procedure that must be done via CLI.
This process requires you to backup a container, and then restore it with the new size. Note: directories may be different for you.
pct stop <id>
vzdump <id> -storage local -compress lzo
pct destroy <id>
pct restore <id> /var/lib/lxc/vzdump-lxc-<id>-....tar.lzo --rootfs local:<newsize>
pct fsck <id>
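Since destroy/restore is destructive, a hypothetical convenience wrapper for the steps above that only prints the command sequence for review (the "...." in the backup filename is a timestamp that must be filled in from the actual vzdump output):

```shell
# Print (not run) the shrink sequence for a given container id and new size.
print_shrink_steps() {
  id="$1"; newsize="$2"
  cat <<EOF
pct stop $id
vzdump $id -storage local -compress lzo
pct destroy $id
pct restore $id /var/lib/lxc/vzdump-lxc-$id-....tar.lzo --rootfs local:$newsize
pct fsck $id
EOF
}

# Example: review the steps for shrinking container 230 to 8 GB.
print_shrink_steps 230 8
```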
Restore a Container from storage
pct restore <id> /var/lib/lxc/vzdump-lxc-<id>-....tar.lzo -storage local-zfs
Clone a container
pct clone <id> <newid> --full --storage local-zfs --hostname newhostname