The following page describes how I resolved a TASK ERROR: Device /dev/dri/card0 does not exist error when starting a Plex LXC created with the Proxmox Community Scripts.  This issue occurred after installing a Realtek PCIe 2.5GbE network adapter in the wireless M.2 slot of a Dell OptiPlex Micro 7040, which changed the system's device ordering.  While this focuses on a specific hardware change, the approach can be adapted to other scenarios where device renumbering causes similar errors.

The solution was to create a udev rule that generates a persistent symlink for the integrated GPU.  This symlink will have a unique, consistent name that won’t change even if the hardware configuration is updated.  Once the symlink is created, we’ll update the Plex LXC’s hardware configuration to use the symlink instead of relying on the default device numbering.

  1. ssh into the Proxmox host

  2. Check the Direct Rendering Manager (DRM) mappings with the following command to confirm which card number the integrated GPU is now mapped to (card1 in this example).

    ls -l /dev/dri/by-path
    
      You should get something similar to this:

    pci-0000:00:02.0-card -> ../card1
    pci-0000:00:02.0-render -> ../renderD128
    
  3. Use the following udevadm command to list the device attributes and find the ATTRS{device} and ATTRS{subsystem_device} values.  These will be used later in our rule.

    udevadm info -a -p $(udevadm info -q path -n /dev/dri/card1) | grep device
    

    You should see something similar to this:

    looking at device '/devices/pci0000:00/0000:00:02.0/drm/card1':
    looking at parent device '/devices/pci0000:00/0000:00:02.0':
    ATTRS{device}=="0x3e92"
    ATTRS{subsystem_device}=="0x085a"
    looking at parent device '/devices/pci0000:00':
    

    Copy down the lines containing the ATTRS{device} and ATTRS{subsystem_device}.

  4. Create and edit a new udev rule file.  Replace the 99 prefix if you already have a rule with that number.

    nano /etc/udev/rules.d/99-gpu-drm.rules
    
  5. Add the following line, replacing the ATTRS{device}=="0x3e92" and ATTRS{subsystem_device}=="0x085a" values with the ones you copied in step 3.

    KERNEL=="card*", SUBSYSTEM=="drm", ATTRS{device}=="0x3e92", ATTRS{subsystem_device}=="0x085a", SYMLINK+="dri/intel-gpu"
    
      KERNEL=="card*" -- Matches only devices starting with "card", with a wildcard to catch anything afterward
    SUBSYSTEM=="drm" -- Restricts to DRM devices only. Avoids accidentally matching with other PCI devices with similar IDs
    ATTRS{device}=="0x3e92" -- Targets the device with the following device id
    ATTRS{subsystem_device}=="0x085a" -- Used to filter down further to avoid catching a device with the same id
    SYMLINK+="dri/intel-gpu" -- Create a symlink to under the dev /dev/ directory

  6. Save and close the udev rule file

  7. Reload the udev rules without rebooting by running the following:

    udevadm control --reload-rules && udevadm trigger
    
  8. Confirm symlink creation with the following command:

    ls -l /dev/dri/intel-gpu
    
      You should see something similar:

    /dev/dri/intel-gpu -> card1
    
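
Optional: you can dry-run the new rule with udevadm test before (or after) reloading in step 7. As far as I can tell, udevadm test reads the rules files directly from disk and prints what udev would do for the device; the grep is only there to cut down the output, and the exact output format varies between systemd versions.

    udevadm test $(udevadm info -q path -n /dev/dri/card1) 2>&1 | grep -i intel-gpu

If the rule matches, the output should mention the dri/intel-gpu symlink.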

Next Steps

  • Update the Plex LXC's device mapping to use /dev/dri/intel-gpu instead of card0 (a hypothetical example of the config change follows this list).
  • Use Ansible to configure the DRM symlink across all Proxmox hosts so you don't have to update the LXC manually every time you migrate it. Ideally, in this scenario, all Proxmox hosts have identical hardware. Otherwise, you'll need to create unique mappings for each host on which the LXC will be hosted. See the Ansible playbook reference below.
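
As a rough sketch of the first item: the container's config lives at /etc/pve/lxc/<vmid>.conf on the Proxmox host. Assuming the Plex LXC passes the GPU through with a Proxmox devX entry (the entry number and gid value below are illustrative and may differ in your config; if yours uses raw lxc.mount.entry lines instead, adjust those the same way), the change is just pointing the entry at the symlink:

    # /etc/pve/lxc/<vmid>.conf (hypothetical excerpt)
    # before:  dev0: /dev/dri/card0,gid=44
    # after:
    dev0: /dev/dri/intel-gpu,gid=44

Restart the container afterwards for the change to take effect.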

Ansible playbook reference

The following is an Ansible playbook that configures all hosts in the pvecluster group with the persistent GPU symlink.  You can copy and modify it as needed.

---
- name: Create persistent gpu symlink on proxmox hosts
  hosts: pvecluster
  become: true
  gather_facts: false

  vars:
    udev_rule_file: "/etc/udev/rules.d/99-gpu-drm.rules"
    symlink_name: "/dev/dri/intel-gpu"

  tasks:
    - name: Check drm by-path mappings
      ansible.builtin.shell: ls -l /dev/dri/by-path | grep 'card'
      register: drm_mappings
      changed_when: false

    - name: Debug drm mappings
      ansible.builtin.debug:
        msg: "{{ drm_mappings.stdout_lines }}"

    - name: Get ATTRS{device} and ATTRS{subsystem_device} values
      ansible.builtin.shell: |
        udevadm info -a -p $(udevadm info -q path -n /dev/dri/card*) | grep -E 'ATTRS\{device\}|ATTRS\{subsystem_device\}' | head -n 2
      register: udev_attrs
      changed_when: false
      failed_when: udev_attrs.rc != 0

    - name: Parse device and subsystem_device from udevadm output
      ansible.builtin.set_fact:
        gpu_device: "{{ udev_attrs.stdout_lines[0].split('==')[1] | trim('\"') }}"
        gpu_subsystem_device: "{{ udev_attrs.stdout_lines[1].split('==')[1] | trim('\"') }}"

    - name: Debug parsed gpu attributes
      ansible.builtin.debug:
        msg: "device={{ gpu_device }}, subsystem_device={{ gpu_subsystem_device }}"

    - name: Create udev rule for Intel gpu symlink
      ansible.builtin.copy:
        dest: "{{ udev_rule_file }}"
        content: |
          KERNEL=="card*", SUBSYSTEM=="drm", ATTRS{device}=="{{ gpu_device }}", ATTRS{subsystem_device}=="{{ gpu_subsystem_device }}", SYMLINK+="dri/intel-gpu"
        owner: root
        group: root
        mode: '0644'

    - name: Reload udev rules
      ansible.builtin.shell: |
        udevadm control --reload-rules && udevadm trigger
      changed_when: false

    - name: Verify gpu symlink creation
      ansible.builtin.stat:
        path: "{{ symlink_name }}"
      register: gpu_symlink

    - name: Fail if gpu symlink not created
      ansible.builtin.fail:
        msg: "GPU symlink {{ symlink_name }} was not created!"
      when: not gpu_symlink.stat.exists

    - name: Confirm gpu symlink creation
      ansible.builtin.debug:
        msg: "GPU symlink {{ symlink_name }} created successfully -> {{ gpu_symlink.stat.lnk_source }}"

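To apply it, run the playbook against an inventory that defines the pvecluster group (the inventory and playbook filenames here are just examples):

    ansible-playbook -i inventory.ini create-gpu-symlink.yml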