It can run some of the smaller LLMs even with an integrated GPU, or purely on CPU, with something like llama3.2:3b. It's also a useful way to get a feel for things before you move up to the larger models.
That is what I do on my laptop!
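If you're running it through Ollama (which is what that model tag looks like, though that's my assumption), getting the model up is roughly:

    ollama pull llama3.2:3b
    ollama run llama3.2:3b

The quantized 3B model is only a couple of GB, so it loads fine in laptop RAM and you can chat with it straight from the terminal.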