nostr:npub1jvrgz7wf9fwftcqppnpyjplltlkcuwghc0pqf9wv3x8ds5zq5t4qmh8tkt That’s an interesting use case/solution. Have you noticed any significant performance penalties by virtualizing NAS software?


Discussion

nostr:npub122dwd89pdvmk2273fc7w8zdhva2y0hhzjlmla55z2gl5x037w4msexa54u If there is, I'm not aware of it, but I haven't benchmarked it. I do have a write-cache disk in the pool and need a read-cache disk as well. I also have a PERC card on order that I'll be putting the drives on, and I'll be giving that card to the VM directly rather than going through the host SATA controller, which is also used by Proxmox.
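For anyone following along, the cache disks map to ZFS SLOG (write) and L2ARC (read) devices, and the PERC hand-off is plain PCI passthrough on the Proxmox side. A rough sketch, where the pool name `tank`, the VM ID `100`, and all device paths/addresses are hypothetical placeholders for your own setup:

```shell
# Inside the TrueNAS VM: add a write-cache (SLOG) and read-cache (L2ARC)
# device to an existing pool. Device paths are placeholders.
zpool add tank log /dev/disk/by-id/nvme-SLOG-DISK
zpool add tank cache /dev/disk/by-id/nvme-L2ARC-DISK

# On the Proxmox host: pass the HBA through to the guest so it talks to
# the drives directly. Requires IOMMU enabled in firmware and on the
# kernel command line (e.g. intel_iommu=on), and the card's PCI address
# from `lspci`.
qm set 100 -hostpci0 01:00.0
```

One caveat: PERC cards are typically flashed to IT mode first so ZFS sees raw disks instead of RAID volumes.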

After talking with nostr:npub1fy0nvfj5gpn5wqcnmfjnurx3wpnaqdah0j75dhmnrv3m5qvf805skpmmfu, we (mostly his help <3) came to the conclusion that, for my needs, a TrueNAS VM atop Proxmox was a better option than the inverse: running a VM inside TrueNAS Core, which may or may not support hardware hand-off as well as Linux does, and then running either Proxmox inside that or manual LXC containers (or Docker).
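The chosen layout can be sketched as a single Proxmox guest for TrueNAS, with disks handed through by stable ID until the HBA arrives. All IDs, sizes, ISO names, and paths below are hypothetical, not the actual config:

```shell
# On the Proxmox host: a minimal TrueNAS guest (values are placeholders).
qm create 100 --name truenas --memory 16384 --cores 4 \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci \
  --cdrom local:iso/TrueNAS-13.0.iso

# Without a passed-through controller, individual disks can still be
# handed to the guest by stable ID rather than by the whole HBA:
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE-DISK
```

Per-disk hand-off like this keeps ZFS inside the guest seeing consistent device IDs, though full controller passthrough is the cleaner end state.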

It's possible I could've used Ceph to handle this facet for me, creating mountable storage pools for LXC, but the docs primarily describe it for distributed hyperconvergence rather than a single home server like mine, where I just want a shared storage pool for apps instead of block storage.
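For the single-node "shared pool for apps" case, Proxmox can skip Ceph entirely and expose a host directory to a container as a bind mount point. A minimal sketch, with a hypothetical container ID `101` and paths:

```shell
# On the Proxmox host: bind-mount a shared host directory into an LXC
# container at /data (container ID and paths are placeholders).
pct set 101 -mp0 /mnt/pool/apps,mp=/data
```

That gives several containers file-level access to the same pool without running a distributed storage layer on one box.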