Anyone have experience running their own AI agents locally without having to be in the command line the whole time? What tools do you use?
I'd love to run and train something on my own machine or on a home server. Thoughts/suggestions?
For coding, VS Code with the Cline extension.
I use Ollama and WebUI in conjunction with one another to have a web-based local AI that resembles ChatGPT.
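If you ever want to script against that same setup instead of using the web UI, here's a minimal sketch that talks to Ollama's local REST API (assumes Ollama is running on its default port 11434 and that a model such as llama3 has already been pulled):

```python
# Minimal sketch: query a local Ollama server from Python.
# Assumes Ollama is running on its default port (11434) and a model like
# "llama3" has already been pulled with `ollama pull llama3`.
import json
import urllib.request

def ask_ollama(prompt: str, model: str = "llama3") -> str:
    """Send one prompt to the local Ollama API and return the response text."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # request a single complete JSON response instead of a stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_ollama("Explain LoRA fine-tuning in one sentence."))
```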
I haven’t found a good way to train a model. Referencing docs is fairly easy, but from my understanding you would need access to the same amount of compute power that was used to train the original model in order to update its weights and train your own model.
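For what it’s worth, the "referencing docs" part really can be this simple: embed your documents locally, embed the question, and hand the closest match to the model. A rough sketch against a local Ollama server (the embedding model name is just an example; endpoint and field names follow Ollama's documented embeddings API, adjust if your version differs):

```python
# Rough sketch of local doc referencing: embed a few documents, embed the
# question, and pick the closest document by cosine similarity.
# Assumes Ollama is running locally and an embedding model such as
# "nomic-embed-text" has been pulled (the model name is an example).
import json
import math
import urllib.request

def embed(text: str, model: str = "nomic-embed-text") -> list[float]:
    payload = json.dumps({"model": model, "prompt": text}).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/embeddings",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

docs = [
    "Ollama serves local models over an HTTP API on port 11434.",
    "LoRA fine-tuning updates small adapter weights instead of the full model.",
]
doc_vectors = [embed(d) for d in docs]

question = "How do I talk to a local model over HTTP?"
q_vec = embed(question)
best = max(zip(docs, doc_vectors), key=lambda pair: cosine(q_vec, pair[1]))[0]
print("Most relevant doc:", best)
```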
The only legitimate person capable of having any weight on this convo is nostr:nprofile1qqstnem9g6aqv3tw6vqaneftcj06frns56lj9q470gdww228vysz8hqpzdmhxue69uhkzmr8duh82arcduhx7mn9qy2hwumn8ghj7etyv4hzumn0wd68ytnvv9hxgqgdwaehxw309ahx7uewd3hkcam28zl
DeepSeek
Training a model means learning your own parameters from data you supply. That would require a massive dataset and also hundreds of Nvidia H100 GPUs to churn that data into model parameters.
This is a simple video showing someone using the latest RTX 5090 to run an existing model and make some AI content at home.
https://cdn.satellite.earth/9d7212700bbeda25189be6f2eab27604e111613b4fdf8f90df72438b446da301.mkv
Maybe he meant specializing (fine-tuning) a model instead of doing a full training run from scratch.
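If anyone wants the code route rather than a GUI, specializing a model on home hardware usually means parameter-efficient fine-tuning (LoRA), where only a small set of adapter weights is trained. A rough sketch with Hugging Face transformers + peft; the model name, dataset file, and hyperparameters below are placeholders, not a recipe:

```python
# Rough sketch: LoRA fine-tuning with Hugging Face transformers + peft.
# Model name, dataset file, and hyperparameters are placeholders; adjust for your hardware.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # placeholder: any small causal LM
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Wrap the base model with small trainable LoRA adapters; only these get updated.
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the full model

# Placeholder dataset: a local JSONL file with a "text" field per line.
data = load_dataset("json", data_files="my_docs.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=1,
                           learning_rate=2e-4, logging_steps=10),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")  # saves only the small adapter weights
```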
H2O LLM Studio: a no-code GUI designed for fine-tuning large language models.
Goose