That’s cool. The issue I have with AI is the need to validate its output. For all we know it could be 99% correct and 1% wrong, and that 1% is easy to miss. But then humans get things wrong too, so I don’t know.
That's why I think validation should have some human intervention. For example, when AI writes code it should write its own tests too, but the test data should be provided/validated by a human.
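Roughly what I mean, as a small pytest-style sketch (the function, names, and values here are just made up for illustration, not from any real project):

    import pytest

    # Imagine this is the AI-generated code under test.
    def normalize_price(raw):
        return round(float(raw.strip().lstrip("$")), 2)

    # Human-provided/reviewed test data: the inputs and expected
    # outputs are curated by a person, not generated by the model.
    HUMAN_CASES = [
        ("$10", 10.0),
        (" 7.25 ", 7.25),
        ("$0.1", 0.1),
    ]

    # The AI can still write the test scaffolding itself; the human
    # only has to sign off on the cases above.
    @pytest.mark.parametrize("raw,expected", HUMAN_CASES)
    def test_normalize_price(raw, expected):
        assert normalize_price(raw) == expected

That way the model does the boring wiring, but the ground truth still comes from a person.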
It seems to be heading that way. It did get a few things wrong, but it generated a decent overview.