Calling LLM tools via on-device Llama 3.2 to work with the filesystem of my desktop running a Pylon MCP server: with no inference costs, ~no latency and no data leaving my home network 💪🤖 
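The core of this flow is a tool schema the model can call, plus a local dispatcher that actually touches the filesystem. Here's a minimal sketch of that loop; `list_directory` is a hypothetical tool name for illustration (the actual tools exposed by a Pylon MCP server may differ), and the tool call is simulated rather than generated by Llama 3.2:

```python
import json
import os

# Tool schema advertised to the model (Llama 3.2 supports tool calling).
# "list_directory" is a hypothetical filesystem tool used for illustration;
# a real MCP server would advertise its own tool names and schemas.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "list_directory",
        "description": "List the entries of a directory on the local filesystem.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

def dispatch(tool_call):
    """Execute a tool call emitted by the model against the local filesystem."""
    name = tool_call["function"]["name"]
    args = tool_call["function"]["arguments"]
    if isinstance(args, str):          # models often return arguments as a JSON string
        args = json.loads(args)
    if name == "list_directory":
        return sorted(os.listdir(args["path"]))
    raise ValueError(f"unknown tool: {name}")

# Simulated tool call, shaped like what the model would emit:
call = {"function": {"name": "list_directory",
                     "arguments": json.dumps({"path": "."})}}
entries = dispatch(call)
```

In the full setup, the model's tool-call output is fed to `dispatch`, and the result is returned to the model as a tool message, all on the local network, which is what keeps inference cost and data egress at zero.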