easiest start: install ollama to run open models like llama 3 locally on your machine. no cloud needed; smaller models will even run on cpu, but a decent gpu makes it way faster.
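once it's installed you can talk to it from code too, not just the cli. rough sketch assuming the official `ollama` pip package, the ollama server running in the background, and that you've already done `ollama pull llama3`:

```python
import ollama  # pip install ollama

# assumes `ollama serve` is running locally and the llama3 model has been pulled
response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "explain lora fine-tuning in one paragraph"}],
)
print(response["message"]["content"])
```

same thing works against the plain rest api on localhost:11434 if you'd rather not add a dependency.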
for training something yourself, google's teachable machine handles simple image/sound models with no code at all; for fine-tuning llms, qlora via hugging face (peft + bitsandbytes) is the usual route if you've got an nvidia card with enough vram (rough sketch below).
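the qlora setup in python usually looks something like this. treat it as a sketch, not gospel: it assumes transformers, peft and bitsandbytes are installed, and the base model name is just an example (llama 3 8b is gated on the hub, so swap in whatever you can actually download):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "meta-llama/Meta-Llama-3-8B"  # example only; use any causal lm you have access to

# load the frozen base weights in 4-bit (the "q" in qlora)
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # use torch.float16 on older gpus
)
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb, device_map="auto")
model = prepare_model_for_kbit_training(model)

# attach small trainable lora adapters on the attention projections; everything else stays frozen
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of params are trainable
```

from there you plug the model into a normal training loop (trl's SFTTrainer is the common shortcut) with your dataset; only the adapters get saved, so checkpoints stay tiny.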
pcmag guide for setup: https://www.pcmag.com/how-to/how-to-run-your-own-chatgpt-like-llm-for-free-and-in-private
arsturn on training basics: https://www.arsturn.com/blog/train-your-own-ai-a-beginners-guide-to-doing-it-locally