The token limit is what really constrains GPT-4: it can't handle a very complex task in a lot of detail all at once.
I think of it like this: GPT-4 can…
Can handle 100% of scope at 0.100 fidelity
Can handle 20% of scope at 0.250 fidelity
Can handle 4% of scope at 0.500 fidelity
Can handle 1% of scope at 0.750 fidelity
Can handle 0.1% of scope at 0.950 fidelity
The figures are just an example; each mission is different, and the relationship is non-linear.
Complexity = f(scope, fidelity)
The token limit is akin to a complexity limit.
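As a rough sketch of the idea (the multiplicative form of the complexity function and the budget value are my own assumptions for illustration, not anything GPT-4 actually exposes):

```python
# Illustrative only: the complexity function and the budget value are assumed,
# chosen to mimic the scope/fidelity trade-off above rather than measured.

def complexity(scope: float, fidelity: float) -> float:
    """Toy complexity measure; the real relationship is non-linear and
    varies per mission, so treat this as a placeholder."""
    return scope * fidelity  # assumed form

TOKEN_BUDGET = 0.10  # stand-in for the model's effective complexity limit

def fits_in_context(scope: float, fidelity: float) -> bool:
    """Does a task of this scope and fidelity fit within the assumed budget?"""
    return complexity(scope, fidelity) <= TOKEN_BUDGET

print(fits_in_context(scope=1.00, fidelity=0.10))  # True: whole scope, low fidelity
print(fits_in_context(scope=0.01, fidelity=0.75))  # True: tiny scope, high fidelity
print(fits_in_context(scope=1.00, fidelity=0.75))  # False: too complex in one go
```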
You have to create a structure that takes the top-level scope and breaks it down into a number of next-level-down workpacks of smaller scope. I do this by keeping the complexity roughly constant while reducing the scope and advancing the fidelity.
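A minimal sketch of such a breakdown structure, assuming a Workpack node and a split helper of my own naming; the split factor and fidelity steps below are picked to mirror the example rows above (100% at 0.10 → 20% at 0.25 → 4% at 0.50):

```python
from dataclasses import dataclass, field

@dataclass
class Workpack:
    """One node in the breakdown: a slice of the mission at a given fidelity."""
    name: str
    scope: float      # fraction of the top-level scope, 0..1
    fidelity: float   # target level of detail, 0..1
    children: list["Workpack"] = field(default_factory=list)

def split(parent: Workpack, n: int, fidelity_step: float) -> list[Workpack]:
    """Break a workpack into n children of smaller scope and higher fidelity,
    aiming to keep each child's complexity roughly level with the parent's."""
    child_scope = parent.scope / n
    child_fidelity = min(1.0, parent.fidelity + fidelity_step)
    parent.children = [
        Workpack(f"{parent.name}.{i}", child_scope, child_fidelity)
        for i in range(1, n + 1)
    ]
    return parent.children

# Top level: whole scope at low fidelity, then two levels of breakdown.
mission = Workpack("mission", scope=1.0, fidelity=0.10)
for wp in split(mission, n=5, fidelity_step=0.15):   # 20% of scope at 0.25
    split(wp, n=5, fidelity_step=0.25)               # 4% of scope at 0.50
```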
If I ever get all this working, I’ll probably publish it.
The QA process is quite different.