i remembered that i used to have a laptop with both a discrete GPU and an APU on the CPU, and that i had been able to run a windows VM on it and give the VM exclusive access to the dGPU via IOMMU passthrough
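a prerequisite for that kind of passthrough is that the dGPU sits in its own IOMMU group. a minimal sketch of checking this by walking /sys/kernel/iommu_groups, assuming a linux host with IOMMU enabled on the kernel command line; the path is the standard sysfs location but the helper name is mine:

```python
from pathlib import Path

def list_iommu_groups(root="/sys/kernel/iommu_groups"):
    """Map each IOMMU group number to the PCI addresses inside it.

    Returns an empty dict when the directory is missing, i.e. the
    IOMMU is disabled or this isn't a linux host.
    """
    base = Path(root)
    if not base.is_dir():
        return {}
    groups = {}
    for group_dir in sorted(base.iterdir(), key=lambda p: int(p.name)):
        # each group dir contains a devices/ subdir with one entry
        # per PCI device, named by its bus address (e.g. 0000:01:00.0)
        groups[group_dir.name] = sorted(
            dev.name for dev in (group_dir / "devices").iterdir()
        )
    return groups

if __name__ == "__main__":
    for group, devices in list_iommu_groups().items():
        print(f"IOMMU group {group}: {', '.join(devices)}")
```

if the GPU shares a group with other devices (common on laptops), passthrough of just the GPU may not be possible without an ACS override, which is its own rabbit hole.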

the tricky part is i'd probably have to sacrifice the use of my second monitor: plug my primary into the APU's output and the dGPU into the secondary. but it's something i'm gonna look into once JetBrains finally implements the ability to set an arbitrary address for the Junie LLM. then i can run windows in a little VM with like 8-16GB of system memory and full control of the GPU, run LM Studio or Ollama on it, and point all my LLM requests at my local GPU, which has 16GB of memory, plenty to run a 7B model fully in VRAM.
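the "point requests at the VM" part is straightforward even before any IDE support lands: ollama listens on port 11434 by default and exposes a plain HTTP API. a minimal sketch of querying it from the host, assuming a hypothetical guest IP (192.168.122.50) and whatever 7B model tag ends up pulled on the guest:

```python
import json
import urllib.request

# hypothetical address of the windows guest; replace with whatever
# IP the VM actually gets on the virtual network
OLLAMA_URL = "http://192.168.122.50:11434/api/generate"

def build_request(model, prompt):
    """Build a POST request for ollama's /api/generate endpoint."""
    body = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )

def ask(model, prompt):
    """Send the prompt and return the model's full (non-streamed) reply."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]
```

with "stream": False the endpoint returns one JSON object whose "response" field is the whole completion, which keeps the client trivial.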

hell, i'm kinda in the mood for this kind of tinkering. let's see how it goes. i even have a paid windows 11 install that i can probably migrate into the VM: just dump its disk image into my home folder and voila.

i'd just miss having the second display for what i mainly use it for: watching the browser, and a two-pane terminal for running tests on my relay in development.

still, being able to run an LLM locally on my dGPU would be awesome. not to mention the other benefit: i could start tinkering with building models.
