Instead of just bitching at you, here are some tips for getting code out of your AI that even a beginner might be able to read and understand, and that might not be complete garbage:

Include in your prompt a file of instructions like:

1. Do not make any function longer than 5 lines of code, excluding documentation and logging. Limit all files to 250 lines of code.

2. Make the code highly object-oriented (or highly functional, whichever you prefer or depending upon your use case) and modular.

3. Include a logger, integrate it into every function, and make it so that I can run the app with different logging levels (there's a rough sketch of points 1, 3, 5, and 6 after this list).

4. Write extensive user documentation for everything, including type definitions and examples for implementation.

5. Use constants wherever applicable, and group the constants at the top of the class or module.

6. Use private or protected wherever useful.

7. Refactor and simplify the code, and remove any dead or redundant code, after each pass.

8. Separate concerns, and give all functions a human-readable name.

9. Add a description of the architecture, patterns, protocols (like Nostr), and libraries you expect to be preferred.

10. If implementing a GUI or other user interface, point it to an example of one you like and tell it to give you something similar.
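To make points 1, 3, 5, and 6 a bit more concrete, here's roughly the shape that kind of instruction file nudges the code toward. Just a sketch I threw together in Go -- every name in it is made up, not something an AI actually produced:

```go
// Hypothetical sketch of the style the instruction file asks for:
// grouped constants, a leveled logger, short functions, unexported helpers.
package broadcast

import (
	"log/slog"
	"os"
)

// Constants grouped at the top of the module (point 5).
const (
	maxRetries   = 3
	relayTimeout = 10 // seconds, illustrative only
)

// The log level comes from LOG_LEVEL, so the app can run at debug, info,
// or warn (point 3); a --log-level flag would work the same way.
var logger = newLogger(os.Getenv("LOG_LEVEL"))

func newLogger(level string) *slog.Logger {
	var l slog.Level
	if err := l.UnmarshalText([]byte(level)); err != nil {
		l = slog.LevelInfo
	}
	return slog.New(slog.NewTextHandler(os.Stderr, &slog.HandlerOptions{Level: l}))
}

// Publish sends one note, retrying a few times.
// Kept under 5 lines, excluding documentation and logging (point 1).
func Publish(content string) error {
	logger.Debug("publishing", "len", len(content))
	return retry(maxRetries, func() error { return send(content) })
}

// retry and send are unexported: Go's spelling of "private" (point 6).
func retry(n int, fn func() error) (err error) {
	for i := 0; i < n; i++ {
		if err = fn(); err == nil {
			return nil
		}
		logger.Warn("attempt failed", "attempt", i+1, "err", err)
	}
	return err
}

func send(content string) error {
	logger.Info("sending", "content", content)
	// transport omitted in this sketch
	return nil
}
```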

Then submit your request in Gherkin scenarios, instead of prose.

Reorganize the scenarios into features -- which should be relatively small, like "broadcast button" -- and get the whole set reasonably comprehensive and coherent, and then feed the AI one feature at a time, along with the prompt instruction file.
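For example, a tiny "broadcast button" feature might boil down to a scenario like: Given the user has typed a message, When they press broadcast, Then a kind-1 note is created with that message. That maps almost one-to-one onto an acceptance test. Rough Go sketch, all names invented:

```go
// Acceptance test for a hypothetical "broadcast button" feature, written
// from the scenario rather than from the implementation.
package broadcast

import "testing"

// note is a minimal stand-in for a Nostr event in this sketch.
type note struct {
	Kind    int
	Content string
}

// buildNote is the imagined unit of work behind the broadcast button.
func buildNote(content string) note {
	return note{Kind: 1, Content: content}
}

// Scenario: Broadcast button builds a text note
//   Given the user has typed a message
//   When they press broadcast
//   Then a kind-1 note is created with that message as its content
func TestBroadcastButtonBuildsTextNote(t *testing.T) {
	// Given
	typed := "hello nostr"

	// When
	n := buildNote(typed)

	// Then
	if n.Kind != 1 || n.Content != typed {
		t.Fatalf("got %+v, want a kind-1 note with content %q", n, typed)
	}
}
```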

And then work that one feature until it's good, manually test it and refactor the code, AND THEN COMMIT THE CODE TO GIT, before asking it to do the next feature. Start a new conversation before the next feature. This is like high-speed agile development and will ALWAYS get a better result because it's iterative and therefore more like how the best human developers work.

If you ask it to fix some code, open a different chat and tell it that another AI wrote the code. Even better, ask another AI and tell it the name of the other one. Just trust me on this one.

Once you have it doing what you want it to do, and the state can be fixed and tagged, have it write unit tests to freeze the functionality. Then make sure the unit tests are easy to read and understand. Remove any test you don't understand. Avoid mocking relays. Use a local or ephemeral relay for testing, instead.
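As an example of what "freeze the functionality" can look like: a plain table-driven test over whatever pure function you ended up with, where every row reads on its own -- and any row you don't understand gets deleted. Hypothetical sketch, made-up helper:

```go
package broadcast

import "testing"

// shorten is a hypothetical helper whose current behavior we want to freeze.
func shorten(s string, max int) string {
	if len(s) <= max {
		return s
	}
	return s[:max] + "…"
}

// Each case is readable on its own; remove any row you don't understand.
func TestShortenFreezesCurrentBehavior(t *testing.T) {
	cases := []struct {
		name string
		in   string
		max  int
		want string
	}{
		{"short strings pass through", "gm", 10, "gm"},
		{"long strings get an ellipsis", "hello nostr world", 11, "hello nostr…"},
		{"exact length is untouched", "hello", 5, "hello"},
	}
	for _, c := range cases {
		t.Run(c.name, func(t *testing.T) {
			if got := shorten(c.in, c.max); got != c.want {
				t.Fatalf("shorten(%q, %d) = %q, want %q", c.in, c.max, got, c.want)
			}
		})
	}
}
```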

And then write your own integration test, with some realistic examples.
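Something in this direction, say -- a test that only runs when you point it at a real local or ephemeral relay, speaks plain NIP-01 at it, and waits for the EOSE. The env var name, the gorilla/websocket dependency, and everything else here are my own choices, not a prescription:

```go
package broadcast

import (
	"encoding/json"
	"os"
	"strings"
	"testing"
	"time"

	"github.com/gorilla/websocket"
)

// TestLocalRelayAnswersSubscription opens a subscription against a local
// relay and waits for the end-of-stored-events marker ("EOSE", NIP-01).
func TestLocalRelayAnswersSubscription(t *testing.T) {
	url := os.Getenv("NOSTR_TEST_RELAY") // ws:// URL of your local or ephemeral relay
	if url == "" {
		t.Skip("set NOSTR_TEST_RELAY to run this integration test")
	}

	conn, _, err := websocket.DefaultDialer.Dial(url, nil)
	if err != nil {
		t.Fatalf("dial relay: %v", err)
	}
	defer conn.Close()

	// ["REQ", <subscription id>, <filter>] per NIP-01; an empty filter matches everything.
	req, _ := json.Marshal([]any{"REQ", "itest", map[string]any{}})
	if err := conn.WriteMessage(websocket.TextMessage, req); err != nil {
		t.Fatalf("send REQ: %v", err)
	}

	// Crude but readable: keep reading until the relay says EOSE or we time out.
	_ = conn.SetReadDeadline(time.Now().Add(5 * time.Second))
	for {
		_, msg, err := conn.ReadMessage()
		if err != nil {
			t.Fatalf("waiting for EOSE: %v", err)
		}
		if strings.Contains(string(msg), `"EOSE"`) {
			return
		}
	}
}
```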

Discussion

Nevermind. I just realized that real beginners are probably trying to read that and are like...

I guess that's the thing with AI. The more knowledge you use to front-load the prompt and review/improve the returns, the more you get out of it and the better the result is.

We could also have a Liminal agent, that talks you to death and then suggests something really brilliant, so that you almost miss it because your mind is exhausted from the long preamble.

And a Michael agent that responds to every question with something like "This coulda been a singleton." or "Have you considered configuring this to be reusable?"

And a nostr:nprofile1qy2hwumn8ghj7etyv4hzumn0wd68ytnvv9hxgqguwaehxw309ankjarrd96xzer9dshxummnw3erztnrdakj7qpqecdlntvjzexlyfale2egzvvncc8tgqsaxkl5hw7xlgjv2cxs705s4t4uaf agent, that usually ignores you and then occasionally drops something complex, and then disappears again.

A nostr:nprofile1qyghwumn8ghj7mn0wd68ytnvv9hxgtcpzemhxue69uhhyetpd3ujumtvv44h2tnyv4mz7qpqfjqqy4a93z5zsjwsfxqhc2764kvykfdyttvldkkkdera8dr78vhsfa5ckl agent would yell at you for a solid 10 minutes, calm down, and then program, refactor, and reprogram in record time, and then drop the result in your feed and be like, "I suffer you idiots all day."

nostr:nprofile1qy2hwumn8ghj76rfwd6zumn0wd68ytnvv9hxgqghwaehxw309ahxverz9ehx7umhdpjhyefwvdhk6qpq2262qa4uhw7u8gdwlgmntqtv7aye8vdcmvszkqwgs0zchel6mz7s4pc3yf agent would seem to never be around or responsive, but then suddenly drop a "Check your messages." in the chat feed and deliver some major feature you asked for 2 months ago and had given up all hope for.

And so on. Could be really fun. 😅

I feel very flattered

Ngl, watching you talk to yourself can be very funny.

I laugh at my own thoughts, too, like a crazy person.

But it’s fun, isn’t it?

It's certainly more exciting than engaging in a discussion with a bunch of ignorant max-normies.

Sometimes you have to talk with the realest person you know: yourself.

or an AI that is pandering as they always do

Yep. Although... I don't even think it's appropriate to call them "AIs" in their current state. Unless I'm missing something🤷. The whole AI buzz just seems like another bs scare tactic.

They're LLMs, which are chatbots. It's the least-interesting, but most-widely applicable form of AI.

Don't get me wrong, it's smart in some ways. But, it's just too damn easy to force it to present misinformation in its answers🤷.

It's like I'm writing a comedy sketch in my head and playing all the roles.

at least they are funny instead of retarded like AI jibber jabber hallucinations

Not only that, the more others load knowledge like this into the hive mind, the better the code it generates in the next iteration.

You put the engineer into prompt engineering!

Yeah. 🤣 I was trying to think of how I'd make the resulting code really easy to read, but that puts all the complexity into the code-generation process. Whoops.

Note to self. Things to avoid:

1. Pessimism

2. Nihilism

3. Getting interviewed by Silberengel when you don’t know jack.

I actually tend to be reaaaallly nice to newbies, but I just give them something mid-level difficult to work on and they disappear, if they aren't up to snuff.

But you make me nervous and I haven’t even applied for a job🤣

Sorry, missed the obvious point that you can have it automate your Gherkin, first, to give you a set of acceptance tests. Then your feature is done when the tests are GREEN.

Like duh. All the facepalms, Silberengel.

YES! TDD becomes dramatically easier with LLMs to write the tests.

i would just add to point 2 - or if your language has interfaces, make use of them to improve modularity

in go, interfaces > functional

in java, interfaces > objects

the reason why i am a #golang maxi is because no other language has coroutines, functional and interfaces, and no objects

name one other language that fits that description. that's right, because there isn't one.

Oh yeah, interfaces. Good point.
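Something like this is how I'd read that: put an interface at the seam, and let the real relay pool, a local test relay, or whatever else plug in behind it. Made-up Go sketch:

```go
package broadcast

import "context"

// Publisher is the seam between the UI and the relay layer. Anything that
// can publish a note satisfies it implicitly -- no inheritance required.
type Publisher interface {
	Publish(ctx context.Context, note string) error
}

// RelayPool would be the real implementation talking to remote relays.
type RelayPool struct{ urls []string }

func (p *RelayPool) Publish(ctx context.Context, note string) error {
	// fan-out to p.urls omitted in this sketch
	return nil
}

// BroadcastButton only knows about the interface, so a local test relay or
// any other Publisher can stand in without touching this code.
type BroadcastButton struct{ Out Publisher }

func (b *BroadcastButton) Press(ctx context.Context, text string) error {
	return b.Out.Publish(ctx, text)
}
```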

I've started experimenting with an iterative LLM-driven workflow in my day job. Yesterday and today I made serious headway on a user story, and it went like this:

1. Copy user story details into a Markdown file, write a prompt around them, and include a sketch of the process by which I'd go about implementing the user story.

2. Give the LLM a context file describing unit testing best practices based on our test engineer's extensive notes.

3. From the user story prompt and the unit test prompt, tell the LLM to write a unit test plan into a third file.

4. Distill this third file down to essential test cases.

5. Prompt the LLM to write the unit tests for those test cases (roughly sketched after this list).

6. Refine the unit test code as needed.

7. Tell the LLM to write an implementation plan for the changes required by the user story.

8. Edit the implementation plan down to the next essential steps.

9. Prompt the LLM to write code according to its implementation plan, then run it against unit tests until all the tests pass.

10. Profit.
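To picture steps 4, 5, and 9: each distilled test case becomes a named subtest, and anything not implemented yet just skips, so step 9 has a clear red-to-green target. A generic sketch with invented plan items:

```go
package story

import "testing"

// Hypothetical distilled test plan: one entry per essential case (step 4).
// Cases that aren't implemented yet skip until step 9 turns them green.
func TestUserStoryEssentialCases(t *testing.T) {
	cases := []struct {
		name        string
		run         func(t *testing.T)
		implemented bool
	}{
		{"valid input is accepted", func(t *testing.T) { /* real assertions go here */ }, true},
		{"empty input is rejected", nil, false},
		{"duplicate submission is idempotent", nil, false},
	}
	for _, c := range cases {
		t.Run(c.name, func(t *testing.T) {
			if !c.implemented {
				t.Skip("not implemented yet")
			}
			c.run(t)
		})
	}
}
```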

The core workflow is a cycle: define -> plan -> code -> repeat.

LLMs are best at natural language processing, so we can capitalize on that by doing a bunch of planning in natural language before setting them to write code.