The model’s performance on summarization is going to depend most of all on how popular the book is. Remember, in this context you’re relying on the model’s memorization of the book, which only really works if there’s a lot written about it. This also means the book can’t be too new, since a recent release won’t have had much written about it yet.

TL;DR: Just use a long-context model like Gemini and put the whole book into your prompt when asking it to summarize, instead of trusting the LLM to remember it.
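
For example, here’s a minimal sketch of that approach using the google-generativeai Python SDK. The model name, file path, and prompt wording are assumptions; adapt them to your own setup.

```python
# Sketch: summarize a book by putting the full text into the prompt of a
# long-context model, instead of relying on what the model memorized.
# Assumes the google-generativeai SDK is installed and an API key is set
# in GOOGLE_API_KEY; the model name and file path below are placeholders.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")  # assumed long-context model name

# Hypothetical path to the full book text.
with open("book.txt", encoding="utf-8") as f:
    book_text = f.read()

prompt = (
    "Summarize the following book. Base the summary only on the text "
    "provided below, not on anything you remember about the book.\n\n"
    + book_text
)

response = model.generate_content(prompt)
print(response.text)
```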


Discussion

I did it with very popular books.

I did find that Gemini was not making the summarization errors that ChatGPT was making.

If the books are super popular it might be OK, but even then it’s not super trustworthy. It’s like asking a friend who read the book years ago to try to remember it for you. Gemini might be a bit better if it has web search enabled by default.

Just pasting the whole book into the prompt is much more like asking a friend who is currently holding the book, and can instantaneously read the whole thing, to give you a summary.