• 7 Posts
  • 6 Comments
Joined 3 months ago
Cake day: April 10th, 2024


  • In the past I have used Proxmox with ZFS RAID on a basic mini PC. With a ZFS mirror it syncs everything except /boot, and Proxmox has a tool called “proxmox-boot-tool refresh” which syncs /boot between drives (rough sketch at the end of this comment). The ZFS kernel module can be loaded in the initramfs, so it will boot fine even if a drive is missing.

    For this project I do not plan to use ZFS, but AFAIK software RAID is now the standard approach anyway. Here is a popular video from Level1Techs about the flaws of hardware RAID: https://youtu.be/l55GfAwa8RI
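
    A minimal sketch of the /boot sync on the ZFS-mirror setup from the first paragraph; the pool name “rpool” and the partition “/dev/sdb2” are placeholder assumptions, not values from my machine:

        zpool status rpool                  # check the ZFS mirror is healthy
        proxmox-boot-tool status            # list the ESPs Proxmox manages
        proxmox-boot-tool refresh           # copy kernels/initramfs onto all of them
        # a brand-new replacement drive needs its ESP set up first:
        proxmox-boot-tool format /dev/sdb2
        proxmox-boot-tool init /dev/sdb2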






  • Thanks. Some of these entries, maybe 20%, have IOMMU groups listed under “lspci_all”, but it is extremely awkward to search through. Maybe I will put a feature request in the forum to make IOMMU groups more searchable. Even so, this is still likely the largest database of IOMMU groupings on the web, even if it is not easily searchable.


  • Thanks, but these are only lists of CPUs and motherboards that support IOMMU, not the IOMMU groups themselves. For me (and many others) the groupings are just as important as whether there is support at all.

    The groupings are defined by the motherboard. In my experience, every motherboard that supports IOMMU puts at least one PCIe slot in its own group, which is good for graphics card passthrough. However, the grouping of other hardware like SATA controllers and NICs varies wildly between boards, and that is what I am interested in.
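
    For anyone who wants to check their own board, here is a rough sketch that walks /sys/kernel/iommu_groups and prints the devices in each group. It assumes the kernel was booted with the IOMMU enabled (e.g. intel_iommu=on or amd_iommu=on):

        for g in /sys/kernel/iommu_groups/*; do
            echo "IOMMU group ${g##*/}:"
            for d in "$g"/devices/*; do
                echo "    $(lspci -nns "${d##*/}")"
            done
        done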



  • Thank you, that is a very good point; I had never thought of that. Just to confirm: is the standard best practice for every connection, even something as simple as a Nextcloud server accessing an NFS server, to go through the firewall?

    Then I could just have one interface per host but use the Proxmox host ID as the VLAN tag so they are all unique, and present a trunk to the guest OPNsense VM. That way it is a router on a stick (rough sketch at the end of this comment).

    I was a bit hesitant to base firewall rules on IP addresses, as a compromised host could change its IP address. However, if each host is on its own VLAN, then I could add a firewall rule that only allows the one “legitimate” IP per VLAN through. The per-subnet rules would still work though.

    I feel like I may have to allow a couple of CTs/VMs to communicate without going through the firewall simply for performance reasons. Has that ever been a concern for you? None of the routing or switching would be hardware accelerated.
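
    Rough sketch of the per-host VLAN idea; the VMIDs (100 for OPNsense, 101/102 for guests) and the bridge name vmbr0 are made-up examples. Each guest NIC gets its own tag, and the OPNsense VM gets an untagged port on a VLAN-aware bridge so it sees the whole trunk:

        # in /etc/network/interfaces, make vmbr0 VLAN-aware:
        #   bridge-vlan-aware yes
        #   bridge-vids 2-4094
        qm set 101 --net0 virtio,bridge=vmbr0,tag=101         # e.g. Nextcloud VM on VLAN 101
        pct set 102 --net0 name=eth0,bridge=vmbr0,tag=102     # e.g. NFS CT on VLAN 102
        qm set 100 --net1 virtio,bridge=vmbr0                 # OPNsense trunk port, no tag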