unit tests, sure, but integration tests are a lot more fiddly. and tests for code that sits several units deep in a composition of units get pretty funny as well. that place between unit and integration formally doesn't have a name, but you often still find bugs there: the small tests pass, then you build those pieces into bigger things, and oh, there's a bug back here.
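a minimal sketch of that seam, with hypothetical functions (`strip_comments`, `parse_pairs` are made up for illustration): each unit passes its own tests, but composing them exposes an input neither unit test covered.

```python
# Hypothetical example of a bug living "between unit and integration":
# each function is fine in isolation, the composition is not.

def strip_comments(lines):
    # drops lines that start with '#'
    return [l for l in lines if not l.startswith("#")]

def parse_pairs(lines):
    # parses "key=value" lines into a dict; silently assumes clean input
    return dict(l.split("=", 1) for l in lines)

# both units pass their small tests...
assert strip_comments(["# note", "a=1"]) == ["a=1"]
assert parse_pairs(["a=1", "b=2"]) == {"a": "1", "b": "2"}

# ...but the composition breaks: a blank line survives strip_comments
# and makes parse_pairs raise ValueError.
lines = ["# config", "a=1", "", "b=2"]
try:
    parse_pairs(strip_comments(lines))
except ValueError:
    print("bug at the seam between the two units")
```

neither unit's test suite is wrong; the bug only exists once you wire them together.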

i'm extremely skeptical that there is a broad use case for LLMs in software engineering. the most useful thing, which still fails to actually grasp the semantics, is writing commit comments. even there i find it doesn't work more than two times out of three.

handy for people working in a company where strict formalised procedures are tediously demanding and the superiors accept a half-assed LLM-generated commit comment. but they don't satisfy me, and the LLM doesn't get what i did half the time. if i end up having to spend more time watching that the thing doesn't make mistakes, the drain on my capacity for attention makes it a dud. i did the work in less time, but then i needed to go touch grass for the next two days, with a vodka and tonic in my hand lol.


Discussion

Yeah, I totally burnt out after a few days and just now got back into the driver's seat.

Felt like my brain got fried.

This gets into different schools of unit test development.

Personally, I subscribe more to the Detroit school, which holds that test isolation means isolation _from other tests_. I try to write tests that treat the system under test like a black box: all I care about are the inputs and outputs of the system's public API. When you don't test all the internal units individually, you catch more of those tricky bugs that show up at the boundaries between units.
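A minimal sketch of what that looks like, with hypothetical names (`ShoppingCart`, `Catalog`, `DiscountPolicy` are invented for illustration): the cart is built from smaller units, but the test uses real collaborators and only asserts on the public API, never on the internals.

```python
# Detroit-style (classicist) test sketch: no mocks, real collaborators,
# assertions only on observable behaviour of the public API.

class Catalog:
    def __init__(self, prices):
        self._prices = prices

    def price_of(self, item):
        return self._prices[item]

class DiscountPolicy:
    def apply(self, total):
        # 10% off orders of 100 or more
        return total * 0.9 if total >= 100 else total

class ShoppingCart:
    def __init__(self, catalog, policy):
        self._catalog = catalog
        self._policy = policy
        self._items = []

    def add(self, item):
        self._items.append(item)

    def total(self):
        subtotal = sum(self._catalog.price_of(i) for i in self._items)
        return self._policy.apply(subtotal)

def test_discount_applies_at_threshold():
    cart = ShoppingCart(Catalog({"book": 60, "lamp": 40}), DiscountPolicy())
    cart.add("book")
    cart.add("lamp")
    # Only the output is asserted; how Catalog and DiscountPolicy
    # collaborate internally stays a black box.
    assert cart.total() == 90.0
```

If `DiscountPolicy` later gets refactored or replaced, this test keeps passing as long as the observable totals stay right, which is exactly the point: a bug at the seam between `Catalog` and `DiscountPolicy` would still surface here, while a mock-heavy test might not notice.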

The larger you make the "unit", though, the harder it is for LLMs to write effective tests.