Now that's a statement I can't relate to more šŸ˜…

I first encountered LDA in a previous project: "find k topics in n documents" sounds great until you realize you need to be pretty confident about how many topics there actually are. Then, unsure whether I could justify that assumption, I went to hierarchical LDA, but in both cases I just didn't know how to interpret the clusters.

Embedding models (BERT, GPT), being trained primarily on natural language, learn to associate co-occurring words. You get a high-dimensional vector, and the idea is that if concepts are similar, they'll be close to each other in this vector space. So you can cluster spatially without needing to assume how many clusters there are šŸ™‚
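A minimal sketch of that embed-then-cluster idea, assuming the sentence-transformers and hdbscan packages (the model name and the toy docs are just placeholders, not from this thread):

```python
from sentence_transformers import SentenceTransformer
import hdbscan

docs = [
    "The central bank raised interest rates again.",
    "Inflation data pushed markets lower today.",
    "The striker scored twice in the final match.",
    "The league announced next season's schedule.",
]

# Each document becomes one high-dimensional vector.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(docs)

# Density-based clustering: no need to fix the number of clusters upfront.
clusterer = hdbscan.HDBSCAN(min_cluster_size=2, metric="euclidean")
labels = clusterer.fit_predict(embeddings)

for doc, label in zip(docs, labels):
    print(label, doc)  # label -1 means "noise", i.e. assigned to no cluster
```

HDBSCAN is a natural fit here because it discovers however many dense regions exist in the embedding space and leaves outliers unassigned, instead of forcing every document into one of k clusters.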


Discussion

Makes sense! Thanks for elaborating.

What do you think about computing a coherence score for the LDA model and using it to tune the number of topics?
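For reference, a sketch of what that sweep might look like with gensim's CoherenceModel, using c_v coherence (the toy `texts` and the range of k values are assumptions for illustration, not from this thread):

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel, CoherenceModel

# Placeholder tokenized corpus; in practice this is your preprocessed data.
texts = [["bank", "rate", "inflation"], ["market", "rate", "bond"],
         ["match", "goal", "league"], ["season", "team", "match"]]

dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]

# Train LDA at several topic counts and keep the most coherent one.
best_k, best_score = None, float("-inf")
for k in range(2, 11):
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k,
                   random_state=0, passes=5)
    coherence = CoherenceModel(model=lda, texts=texts,
                               dictionary=dictionary,
                               coherence="c_v").get_coherence()
    if coherence > best_score:
        best_k, best_score = k, coherence

print(best_k, best_score)
```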