Proxmox LXC


Revision as of 13:01, 22 February 2022

Proxmox-VE Main Page

Proxmox-VE

Mounts

Mounting a directory available to the host machine

In this example, Drivepool is a CIFS share on another machine. The container we are adding it to is 230. The first location is the path on the host machine, and the second is where you want it mounted in the container.

The container should be shutdown/stopped before running this command.

pct set 230 -mp0 /mnt/pve/Drivepool,mp=/Drivepool

OR add to the container's config file, located in /etc/pve/lxc/<container-id>.conf:

mp0: /mnt/pve/DrivePool,mp=/Drivepool

If your shared directory is present on all nodes add

,shared=1

Like this:

mp0: /mnt/pve/DrivePool,mp=/Drivepool,shared=1
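With the mount point added, a quick check from the host confirms it works (a sketch; `pct exec` runs a command inside the container, and 230 and /Drivepool are this page's example values):

```shell
# Start the container and confirm the host directory is visible inside it
pct start 230
pct exec 230 -- ls -la /Drivepool
# Review the container's current mount-point config
pct config 230
```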

Mounting a CIFS share in a container

With the container stopped, go to Options and edit Features. Enable CIFS and nesting.
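These features can also be set from the host CLI instead of the GUI (a sketch; 230 is this page's example container ID, and `mount=cifs` is the feature flag behind the CIFS checkbox):

```shell
# With the container stopped, enable CIFS mounting and nesting from the host
pct set 230 --features mount=cifs,nesting=1
```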

Start the container.

We're going to create some configuration files

mkdir /etc/autofs
nano /etc/autofs/DrivePool.conf
DrivePool -fstype=cifs,rw,credentials=/etc/autofs/DrivePool.creds,file_mode=0777,dir_mode=0777 ://IPADDRESS/DrivePool
nano /etc/autofs/DrivePool.creds
 domain=YOURDOMAIN
 username=YOURUSERNAME
 password=YOURPASSWORD
 

Install AutoFS

apt install -y cifs-utils autofs

Add Our Configuration

nano /etc/auto.master
/FS1 /etc/autofs/DrivePool.conf --ghost

Restart The Service

service autofs restart
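autofs mounts on first access, so simply listing the path should trigger the mount (a sketch using this page's example names):

```shell
# Accessing the directory triggers the automount
ls /FS1/DrivePool
# The CIFS entry should now appear in the mount table
mount | grep DrivePool
```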

Mount directories from other containers

Method 1: AutoFS + sshfs

With the container stopped, go to Options and edit Features. Enable FUSE and nesting.

Start the container.

We're going to create some configuration files

mkdir /etc/autofs
nano /etc/autofs/plex1.conf
bar -fstype=fuse,rw,nodev,nonempty,noatime,allow_other,max_read=65536 :sshfs\#sysop@10.0.12.90:/your/remote/path
apt-get install sshfs autofs

Now we set up passwordless auth to the other container

ssh-keygen -t rsa
ssh-copy-id -i .ssh/id_rsa.pub sysop@10.0.12.90

It will prompt you for the password.

We'll create the mount directory

mkdir /mnt/sshfs

And add our configuration to autofs

nano /etc/auto.master
/mnt/sshfs /etc/autofs/plex1.conf --timeout=30 --ghost

Restart The Service

service autofs restart

Method 2: sshfs only

With the container stopped, go to Options and edit Features. Enable FUSE and nesting.

apt-get install sshfs
nano /home/sshfsmounts.sh
echo "yourpass" | sshfs youruser@10.0.15.31:/path/on/remote /mnt/pathyouwant -o password_stdin
crontab -e
@reboot /bin/bash /home/sshfsmounts.sh

Passthrough Physical Disk to Container
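No details are given here, but following the device-passthrough pattern used below, a whole disk can be handed to a container roughly like this (a sketch; /dev/sdb and the 8:16 major:minor numbers are assumptions, check yours with `ls -l`):

```shell
# On the host: find the disk's major:minor numbers (here assumed to be 8:16)
ls -l /dev/sdb
# Then add to /etc/pve/lxc/<container-id>.conf ('b' = block device):
#  lxc.cgroup2.devices.allow: b 8:16 rwm
#  lxc.mount.entry: /dev/sdb dev/sdb none bind,optional,create=file
```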

Device Passthrough

GPU Passthrough

Install pve-headers before and after any driver installs on the host, as below:

apt install pve-headers

NOTES

This sometimes needs to be repeated after running updates on the host and rebooting.

Nvidia

On Host

Download Nvidia Driver for your card

chmod +x NVIDIA-Linux-x86_64-460.84.run
./NVIDIA-Linux-x86_64-460.84.run

Create Kernel rules

nano /etc/udev/rules.d/70-nvidia.rules
 KERNEL=="nvidia_modeset", RUN+="/bin/bash -c '/usr/bin/nvidia-modprobe -c0 -m && /bin/chmod 666 /dev/nvidia-modeset*'"
 # Create /dev/nvidia0, /dev/nvidia1 … and /dev/nvidiactl when nvidia module is loaded
 KERNEL=="nvidia", RUN+="/bin/bash -c '/usr/bin/nvidia-smi -L && /bin/chmod 666 /dev/nvidia*'"
 # Create the CUDA node when nvidia_uvm CUDA module is loaded
 KERNEL=="nvidia_uvm", RUN+="/bin/bash -c '/usr/bin/nvidia-modprobe -c0 -u && /bin/chmod 0666 /dev/nvidia-uvm*'"
 

Do a reboot

reboot

Find the GPU device numbers. These are usually 226:0 and 226:128.

ls -l /dev/dri
 crw-rw---- 1 root video 226, 0 Feb 11 18:11 card0
 crw-rw---- 1 root render 226, 128 Feb 11 18:11 renderD128
 

AND

ls -al /dev/nvidia*
 crw-rw-rw- 1 root root 195, 0 Feb 11 18:11 /dev/nvidia0
 crw-rw-rw- 1 root root 195, 255 Feb 11 18:11 /dev/nvidiactl
 crw-rw-rw- 1 root root 195, 254 Feb 11 18:11 /dev/nvidia-modeset
 crw-rw-rw- 1 root root 236, 0 Feb 11 18:11 /dev/nvidia-uvm
 crw-rw-rw- 1 root root 236, 1 Feb 11 18:11 /dev/nvidia-uvm-tools
 

Take note of the major numbers in the fifth column above: 195, 236, and 226 respectively.

ADD these lines to /etc/pve/lxc/<container-id>.conf

 lxc.cgroup2.devices.allow: c 195:0 rw
 lxc.cgroup2.devices.allow: c 195:255 rw
 lxc.cgroup2.devices.allow: c 195:254 rw
 lxc.cgroup2.devices.allow: c 236:0 rw
 lxc.cgroup2.devices.allow: c 236:1 rw
 lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
 lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
 lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
 lxc.mount.entry: /dev/nvidia-modeset dev/nvidia-modeset none bind,optional,create=file
 lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
 
 lxc.cgroup2.devices.allow: c 226:0 rwm
 lxc.cgroup2.devices.allow: c 226:128 rwm
 lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
 lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
 

In Container

Copy the driver installer over from the host, e.g. with scp or pct push.

chmod +x NVIDIA-Linux-x86_64-460.84.run
./NVIDIA-Linux-x86_64-460.84.run --no-kernel-module
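If the device nodes were passed through correctly, the driver's userland tools should see the card from inside the container (assuming the container driver version matches the host's):

```shell
# Inside the container: the GPU should be listed even though
# no kernel module is loaded here (the host provides it)
nvidia-smi
```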

AMD Radeon

On Host

Download AMD Driver for your card

Extract file, and navigate into the extracted contents.

./amdgpu-pro-install --no-32 --opencl=legacy,rocr -y
apt install amf-amdgpu-pro

Find the GPU device numbers. These are usually 226:0 and 226:128.

ls -l /dev/dri

ADD these lines to /etc/pve/lxc/<container-id>.conf

 lxc.cgroup2.devices.allow: c 226:0 rwm
 lxc.cgroup2.devices.allow: c 226:128 rwm
 lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
 lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
 

In Container

Copy the driver installer over from the host, e.g. with scp or pct push.

./amdgpu-install --opencl=legacy --headless --no-dkms --no-32 -y
apt install amf-amdgpu-pro
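A quick way to confirm the container sees the GPU is to check the render nodes and list OpenCL devices (the clinfo package is an assumption here, not part of the original steps):

```shell
# Inside the container: the passed-through nodes should exist
ls -l /dev/dri
# Optional: list OpenCL platforms/devices
apt install clinfo && clinfo
```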

Intel Quick Sync

I've not personally tested this, but:

"

All Intel CPU’s since Sandy Bridge released in 2011 have hardware acceleration for H.264 built in.

So if your CPU supports Quick Sync you can speed up transcoding and reduce load as well as energy consumption.

"

 lxc.cgroup2.devices.allow: c 226:0 rwm
 lxc.cgroup2.devices.allow: c 226:128 rwm
 lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
 lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
 

Net Passthrough

This is important if your container needs to be able to create a TUN device, which is useful for applications like OpenVPN.

ADD these lines to /etc/pve/lxc/<container-id>.conf

 lxc.cgroup2.devices.allow: c 10:200 rwm
 lxc.mount.entry: /dev/net dev/net none bind,create=dir
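Inside the container, the TUN device should now be present; 10:200 is the fixed misc major:minor for /dev/net/tun, matching the devices.allow line above:

```shell
# Inside the container: verify the TUN device node exists
ls -l /dev/net/tun
# Expect a character device with major 10, minor 200
```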
 

DVB Passthrough (TV Tuner Cards)

This is useful for applications like Tvheadend, NextPVR, Plex, Emby, and Jellyfin.

ADD these lines to /etc/pve/lxc/<container-id>.conf

 lxc.cgroup2.devices.allow: c 212:* rwm
 lxc.mount.entry: /dev/dvb dev/dvb none bind,optional,create=dir
 

USB Passthrough

Find the bus and device numbers of the USB device via:

 lsusb
 Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
 Bus 001 Device 013: ID 045e:0800 Microsoft Corp.
 Bus 001 Device 020: ID 04a9:1746 Canon, Inc.
 Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
 

ADD these lines to /etc/pve/lxc/<container-id>.conf

 lxc.cgroup2.devices.allow: c 189:* rwm
 lxc.mount.entry: /dev/bus/usb/001/020 dev/bus/usb/001/020 none bind,optional,create=file
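The `189:*` rule works because USB device nodes under /dev/bus/usb all use character major 189, with minor `(bus - 1) * 128 + (device - 1)`. A small sketch of that arithmetic (the `usb_minor` helper is hypothetical, for illustration only):

```shell
# Compute the minor number of /dev/bus/usb/BBB/DDD from lsusb's bus/device
usb_minor() {
  bus=$1; dev=$2
  echo $(( (bus - 1) * 128 + (dev - 1) ))
}
usb_minor 1 20   # Bus 001 Device 020 -> 19
```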
 

Application Tricks for running in LXC

Docker

Host

 nano /etc/modules-load.d/modules.conf
 


 overlay
 


 modprobe overlay
 

With the container stopped, go to Options and edit Features. Enable nesting and keyctl.

Container

Install

 curl -fsSL https://get.docker.com -o get-docker.sh
 sh get-docker.sh
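Once the install script finishes, a quick smoke test confirms Docker can run containers under LXC (assumes network access from inside the container):

```shell
# Pull and run the hello-world image, removing the container afterwards
docker run --rm hello-world
```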
 


Run NTP server in LXC

Proxmox drops the sys_time capability by default, which prevents containers from setting the system clock. Override the capability drop in the container config, leaving sys_time out of the list:

lxc.cap.drop = sys_module mac_admin mac_override

Run Avahi in LXC

mkdir -p /etc/systemd/system/avahi-daemon.service.d

 
 cat <<EOF > /etc/systemd/system/avahi-daemon.service.d/override.conf
 [Service]
 ExecStart=
 ExecStart=/usr/sbin/avahi-daemon -s --no-rlimits
 EOF
 
 systemctl daemon-reload
 systemctl start avahi-daemon
 systemctl status avahi-daemon
 

Running Ubuntu Snaps in LXC

Install squashfuse and fuse in the container

apt install squashfuse fuse

then install snapd

apt install snapd

LXC apache2 NAMESPACE fix

apache2's systemd unit enables PrivateTmp, whose namespace setup can fail in unprivileged containers; disable it and set NoNewPrivileges instead:

sed -i -e 's,PrivateTmp=true,PrivateTmp=false\nNoNewPrivileges=yes,g' /lib/systemd/system/apache2.service
systemctl daemon-reload
systemctl start apache2.service

Forward ALSA audio to LXC container

Host Machine

 apt-get update
 apt-get install dkms
 
 apt-get install libasound2 alsa-utils alsa-oss
 
 root@pve:~# arecord -l
 **** List of CAPTURE Hardware Devices ****
 card 0: PCH [HDA Intel PCH], device 0: ALC283 Analog [ALC283 Analog]
   Subdevices: 0/1
   Subdevice #0: subdevice #0
 card 1: CameraB409241 [USB Camera-B4.09.24.1], device 0: USB Audio [USB Audio]
   Subdevices: 1/1
   Subdevice #0: subdevice #0
 
 root@pve:~# ls -la /dev/snd
 total 0
 drwxr-xr-x  4 root root      360 Jul 10 23:26 .
 drwxr-xr-x 24 root root     4300 Jul 10 23:26 ..
 drwxr-xr-x  2 root root       60 Jul 10 23:26 by-id
 drwxr-xr-x  2 root root       80 Jul 10 23:26 by-path
 crw-rw----  1 root audio 116,  2 Jul 11 08:50 controlC0
 crw-rw----  1 root audio 116, 12 Jul 11 08:50 controlC1
 ...
 
ADD these lines to /etc/pve/lxc/<container-id>.conf (this older example uses the cgroup v1 key; on current Proxmox use lxc.cgroup2.devices.allow)

 lxc.cgroup.devices.allow = c 116:* rwm
 lxc.mount.entry = /dev/snd dev/snd none bind,create=dir 0 0
 

Remove apparmor from a container

There may be times when AppArmor prevents a container from running properly (this is rare). If you trust the container, you can disable AppArmor in its configuration file:

Configs are located in `/etc/pve/lxc`

add to the bottom of the config:

 lxc.apparmor.profile: unconfined
 

Reduce the size of a container

Containers can easily be increased in size via the web panel. Reducing their size is a slightly more complicated procedure that must be done via the CLI.

This process requires you to backup a container, and then restore it with the new size. Note: directories may be different for you.

 pct stop <id>
 vzdump <id> -storage local -compress lzo
 pct destroy <id>
 pct restore <id> /var/lib/lxc/vzdump-lxc-<id>-....tar.lzo --rootfs local:<newsize>
 pct fsck <id>
 

Restore a Container from storage

 pct restore <id> /var/lib/lxc/vzdump-lxc-<id>-....tar.lzo -storage local-zfs
 

Clone a container

 pct clone <id> <newid> --full --storage local-zfs --hostname newhostname