Oh, I actually can't test that - I don't have a suitable device or MCU to turn into one, sadly.
Well, my first thought was to dump the core documentation into the context (which means I'll need to use a 128k-context model on my machine) and first have it summarize the docs and derive the core principles. From there, using those, I'd have it write the core functions for deriving the users. And lastly, have it write a CLI around the interface it generated in the previous step.
This keeps the relevant information in context, lets the LLM "forget" stuff as it gets bumped out of the context window, and lets me approach the problem iteratively.
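Roughly, the staged approach could be sketched like this - a loose sketch only, assuming a local OpenAI-compatible endpoint (e.g. llama.cpp's server or Ollama); the URL, model name, and stage prompts are all placeholders, not a real project:

```python
# Sketch of the staged, context-pruning workflow: each stage only carries
# forward the previous stage's output, so the raw docs can fall out of the
# 128k context window. Endpoint/model/prompts are hypothetical placeholders.
import json
import urllib.request

STAGES = [
    "Summarize these docs and derive the core principles.",
    "Using those principles, write the core functions for deriving the users.",
    "Write a CLI around the interface you generated in the last step.",
]

def build_messages(stage_prompt: str, carry_over: str) -> list[dict]:
    """Build a chat payload that carries only the prior stage's output."""
    return [
        {"role": "system", "content": "You are a careful coding assistant."},
        {"role": "user", "content": f"{carry_over}\n\n{stage_prompt}"},
    ]

def run_stage(messages: list[dict],
              url: str = "http://localhost:8080/v1/chat/completions") -> str:
    """One blocking call to a local OpenAI-compatible server (placeholder URL)."""
    body = json.dumps({"model": "local", "messages": messages}).encode()
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The driver loop would then feed each stage's answer in as the next stage's `carry_over`, starting with the dumped documentation.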
Not exactly the vibe-coding way - but for that you'd use some cloud provider, which I don't use, nor have a subscription to. Just me and my 4090, baby. :D