Calling LLM tools with on-device Llama 3.2 to work with my desktop's filesystem through a Pylon MCP server: no inference costs, ~no latency, and no data leaving my home network 💪🤖
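The core of a setup like this is a tool-calling loop: the local model emits a structured tool call, and a dispatcher executes it against the filesystem and feeds the result back. Here's a minimal sketch of that dispatch side; the tool names (`list_dir`, `read_file`), schemas, and the shape of the tool-call dict are illustrative assumptions, not Pylon's actual API.

```python
import json
import os
import tempfile

# Illustrative filesystem tools an MCP server might expose to the model.
def list_dir(path: str) -> str:
    """Return a JSON array of entries in a directory."""
    return json.dumps(sorted(os.listdir(path)))

def read_file(path: str) -> str:
    """Return the contents of a text file."""
    with open(path) as f:
        return f.read()

TOOLS = {"list_dir": list_dir, "read_file": read_file}

def dispatch(tool_call: dict) -> str:
    """Execute a tool call emitted by the model and return its result.

    Assumes an Ollama-style shape: {"name": ..., "arguments": {...}}.
    """
    fn = TOOLS[tool_call["name"]]
    return fn(**tool_call["arguments"])

# Simulate a tool call the model might emit after a prompt like
# "what files are in my scratch directory?"
demo_dir = tempfile.mkdtemp()
with open(os.path.join(demo_dir, "notes.txt"), "w") as f:
    f.write("hello")

result = dispatch({"name": "list_dir", "arguments": {"path": demo_dir}})
print(result)  # a JSON array including "notes.txt"
```

In the real loop, `result` would be appended to the chat history as a tool message and the model would generate its final answer from it, all on the local machine.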
