yeah, pretty much the main things i'm interested in are:

1. training my own models, so i can train them on a data set i think is reasonable (probably nostr events, programming documentation, and philosophy/religious/spiritual texts)

2. using coding agents, which requires a model with a lot of computer science trained into it, so the models i make in 1. will be good for this.

what i would do first is get the agent i have now to write a feeder script that pulls a cache of all the kinds of data i want: from archive.org, from project gutenberg, and from a shitload of golang github projects, focusing on religion, philosophy, mathematics, cryptography and computer science. then i'd feed that into a training process to build a model that is an expert in all of these subjects and perfect for coding agent work, because it would be go native; most agents barely have a grasp of Go. i want the LLM to know how to write correct, fully compliant, idiomatic Go without the stupid mistakes. the agent i have now is always forgetting to add or remove imports, or to make sure every symbol is actually used, and those are compile errors in Go.

and then i would probably work on writing an agent that is allowed to rewrite files in a prescribed directory, and get it to write and debug code by designing a scheme that takes a question, turns it into a plan for solving it, and updates the plan as new information changes the required steps.
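the question-to-plan scheme might boil down to a structure like this; a minimal sketch in go, assuming the plan is just an ordered step list the agent can splice into as compiler or test output changes what needs doing (all names are hypothetical, not from any real agent framework):

```go
package main

import (
	"fmt"
	"slices"
)

type Status int

const (
	Pending Status = iota
	Done
)

type Step struct {
	Desc   string
	Status Status
}

// Plan holds the question and the ordered steps derived from it.
type Plan struct {
	Question string
	Steps    []Step
}

// Next returns the index of the first pending step,
// or false when the plan is finished.
func (p *Plan) Next() (int, bool) {
	for i, s := range p.Steps {
		if s.Status == Pending {
			return i, true
		}
	}
	return 0, false
}

// InsertAfter splices a new step in after index i -- this is the
// "updating the plan as information changes" part.
func (p *Plan) InsertAfter(i int, desc string) {
	p.Steps = slices.Insert(p.Steps, i+1, Step{Desc: desc})
}

func main() {
	p := &Plan{
		Question: "why does package x fail to build?",
		Steps: []Step{
			{Desc: "run go build and capture errors"},
			{Desc: "fix unused imports"},
			{Desc: "re-run tests"},
		},
	}
	i, _ := p.Next()
	p.Steps[i].Status = Done
	// build output revealed a missing dependency, so the plan grows:
	p.InsertAfter(i, "go get the missing module")
	for _, s := range p.Steps {
		fmt.Printf("[%v] %s\n", s.Status == Done, s.Desc)
	}
	// prints:
	// [true] run go build and capture errors
	// [false] go get the missing module
	// [false] fix unused imports
	// [false] re-run tests
}
```

keeping the plan as explicit data rather than hidden in the prompt means the agent can show its current plan, and the update rule is just a list splice.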

it's not a hardcore bucket-list thing for me, but i can see myself having a dedicated rig for LLM work in the near future.
