It can run some smaller LLMs even with an integrated GPU, or purely on the CPU. Something like llama3.2:3b is still useful, and you'll get a feel for things before you move on to the larger models.
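
For reference, a minimal sketch of querying a small model like that through a local Ollama server. This assumes Ollama is running on its default port (11434) and that the model tag has already been pulled (e.g. `ollama pull llama3.2:3b`); the prompt is just illustrative.

```python
# Minimal sketch: query a small model served by a local Ollama instance.
# Assumes the Ollama server is running locally on its default port 11434
# and the model llama3.2:3b has already been pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local endpoint

payload = json.dumps({
    "model": "llama3.2:3b",   # small model that fits on CPU / iGPU setups
    "prompt": "Explain what a context window is in one sentence.",
    "stream": False,          # return a single JSON object instead of a stream
}).encode("utf-8")

req = urllib.request.Request(
    OLLAMA_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())

print(result["response"])  # the model's completion text
```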


Discussion

That is what I do on my laptop!