Replying to Silberengel

Instead of just bitching at you, here are some tips for getting code out of your AI that even a beginner might be able to read and understand, and that might not be complete garbage:

Include in your prompt a file of instructions like:

1. Do not make any function longer than 5 lines of code, excluding documentation and logging. Limit all files to 250 lines of code.

2. Make the code highly object-oriented (or highly functional, whichever you prefer or depending upon your use case) and modular.

3. Include a logger, integrate it into every function, and make it so that I can run the app with different logging levels.
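A minimal sketch of what tip 3 can produce in Python, using the standard `logging` module (the app and function names here are invented for illustration):

```python
import logging
import sys

# Module-level logger; "myapp" is a placeholder name.
logger = logging.getLogger("myapp")

def configure_logging(level_name):
    """Set the app-wide log level from a string like "DEBUG" or "INFO"."""
    level = getattr(logging, level_name.upper(), logging.INFO)
    logging.basicConfig(stream=sys.stderr, level=level,
                        format="%(asctime)s %(levelname)s %(name)s: %(message)s")

def broadcast(message):
    """Example function with the logger integrated, per the prompt rules."""
    logger.debug("broadcast called with %r", message)
    sent = bool(message)  # stand-in for the real send logic
    logger.info("broadcast %s", "succeeded" if sent else "skipped: empty message")
    return sent
```

Run the app with `configure_logging("DEBUG")` during development and `configure_logging("WARNING")` in production, and every function reports what it did without code changes.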

4. Write extensive user documentation for everything, including type definitions and examples for implementation.

5. Use constants wherever applicable, and group the constants at the top of the class or module.
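Tip 5 in practice might look like this; the class and constant names are made up for illustration:

```python
class RelayClient:
    # Constants grouped at the top of the class (values are illustrative).
    DEFAULT_TIMEOUT_SECONDS = 10
    MAX_RETRIES = 3

    def __init__(self, timeout=None):
        # Fall back to the named default instead of a magic number.
        self._timeout = timeout if timeout is not None else self.DEFAULT_TIMEOUT_SECONDS

    def retries_remaining(self, attempts_made):
        """Small, readable helper that leans on the named constants."""
        return max(self.MAX_RETRIES - attempts_made, 0)
```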

6. Use private or protected wherever useful.

7. Refactor and simplify the code, and remove any dead or redundant code, after each pass.

8. Separate concerns, and give all functions a human-readable name.

9. Add a description of the architecture, patterns, protocols (like Nostr), and libraries you want it to prefer.

10. If implementing a GUI or other user interface, point it to an example of one you like and tell it to give you something similar.

Then submit your request in Gherkin scenarios, instead of prose.
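A request in that form might look like this (the feature and steps are invented for illustration):

```gherkin
Feature: Broadcast button
  Scenario: User broadcasts a note
    Given I am logged in with a valid key
    And I have typed a non-empty note
    When I press the broadcast button
    Then the note is published to my configured relays
    And I see a confirmation message
```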

Reorganize the scenarios into features -- which should be relatively small, like "broadcast button" -- make the whole set comprehensive and coherent, and then feed the AI one feature at a time, along with the prompt instruction file.

And then work that one feature until it's good, manually test it and refactor the code, AND THEN COMMIT THE CODE TO GIT, before asking it to do the next feature. Start a new conversation before the next feature. This is like high-speed agile development and will ALWAYS get a better result because it's iterative and therefore more like how the best human developers work.

If you ask it to fix some code, open a different chat and tell it that another AI wrote the code. Even better, ask another AI and tell it the name of the other one. Just trust me on this one.

Once you have it doing what you want it to do, and the state can be fixed and tagged, have it write unit tests to freeze the functionality. Then make sure the unit tests are easy to read and understand. Remove any test you don't understand. Avoid mocking relays. Use a local or ephemeral relay for testing, instead.
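The freezing tests should be small and intention-revealing, like this sketch (the function under test is a stand-in; in a real project you'd import it from your app code):

```python
def format_note(content, max_length=280):
    """Trim whitespace and truncate overlong notes (stand-in for real app code)."""
    trimmed = content.strip()
    return trimmed[:max_length]

# Readable, pytest-style tests -- each one freezes exactly one behavior.
def test_strips_surrounding_whitespace():
    assert format_note("  hello  ") == "hello"

def test_truncates_to_max_length():
    assert format_note("x" * 300) == "x" * 280

def test_short_note_passes_through_unchanged():
    assert format_note("gm") == "gm"
```

If you can't explain in one sentence what a test like these freezes, that's the test to delete.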

And then write your own integration test, with some realistic examples.

I've started experimenting with an iterative, LLM-driven workflow in my day job. Yesterday and today I made serious headway on a user story, and it went like this:

1. Copy user story details into a Markdown file, write a prompt around them, and include a sketch of the process by which I'd go about implementing the user story.

2. Give the LLM a context file describing unit testing best practices based on our test engineer's extensive notes.

3. From the user story prompt and the unit test prompt, tell the LLM to write a unit test plan into a third file.

4. Distill this third file down to essential test cases.

5. Prompt the LLM to write the unit tests for those test cases.

6. Refine the unit test code as needed.

7. Tell the LLM to write an implementation plan for the changes required by the user story.

8. Edit the implementation plan down to the next essential steps.

9. Prompt the LLM to write code according to its implementation plan, then run it against unit tests until all the tests pass.

10. Profit.

The core workflow is a cycle: define -> plan -> code -> repeat.

LLMs are best at natural language processing, so we can capitalize on that by doing a bunch of planning in natural language before setting them to write code.
