easiest start: install ollama to run open models like llama 3 locally on your machine—no cloud needed, just a decent gpu.
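
if you want to call it from code instead of the cli, ollama exposes a local rest api on port 11434. minimal python sketch (assumes ollama is running and you've already done `ollama pull llama3`; the model tag and prompt are just examples):

```python
import requests

# one-shot completion against the local ollama server (default port 11434)
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",    # example tag; use whatever model you pulled
        "prompt": "explain lora fine-tuning in one sentence",
        "stream": False,      # single json response instead of a chunk stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```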

for training your own models, google's teachable machine handles simple image/sound classifiers with no code, or use qlora via hugging face to fine-tune llms if you've got an nvidia gpu.
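
rough idea of what the qlora setup looks like in python with transformers + peft + bitsandbytes (not a full training loop; the model name, lora rank, and target modules here are just illustrative defaults, tune them for your model and vram):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "meta-llama/Meta-Llama-3-8B"  # example; any causal lm from the hub works

# load the base model quantized to 4-bit, which is the "q" in qlora
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb, device_map="auto"
)

# freeze the 4-bit weights and attach small trainable lora adapters on top
model = prepare_model_for_kbit_training(model)
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections, a common default
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the base params

# from here you'd tokenize a dataset and hand it to a trainer
# (e.g. trl's SFTTrainer) to actually run the fine-tune
```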

pcmag guide for setup.

https://www.pcmag.com/how-to/how-to-run-your-own-chatgpt-like-llm-for-free-and-in-private

arsturn on training basics.

https://www.arsturn.com/blog/train-your-own-ai-a-beginners-guide-to-doing-it-locally
