In this case I was exploring a corpus of data, so my thinking was LDA would be a good solution for discovering latent topics. I'm not a data scientist by trade though, so just winging it
Discussion
Now that's a statement I couldn't relate to more
I first encountered LDA on a previous project. "Find k topics in n documents" sounds great until you realize you need to be pretty confident about how many topics there actually are. Not wanting to impose that assumption, I moved to hierarchical LDA, but in both cases I just didn't know how to interpret the clusters.
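A minimal sketch of that constraint, assuming scikit-learn and a toy corpus (the documents and k here are placeholders, not anything from the original project):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the cat sat on the mat",
    "dogs are loyal pets",
    "stock prices fell sharply today",
]

# LDA works on bag-of-words counts, not raw text
counts = CountVectorizer(stop_words="english").fit_transform(docs)

# n_components is the k: it has to be fixed before fitting,
# which is exactly the assumption that's hard to justify upfront
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)  # shape (n_docs, k): per-document topic mixtures
```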
Embedding models, being trained primarily on natural language (BERT, GPT), learn to associate co-occurring words. You get a high-dimensional vector per document, and the idea is that if concepts are similar, they'll be close to each other in that vector space. So you can cluster spatially without needing to assume how many clusters there are.
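A rough sketch of that approach, assuming sentence-transformers for the embeddings and scikit-learn's HDBSCAN for the clustering; the model name and min_cluster_size are illustrative choices, not a recommendation:

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import HDBSCAN

docs = [
    "the cat sat on the mat",
    "dogs are loyal pets",
    "stock prices fell sharply today",
]

# One dense vector per document; similar meanings land near each other
embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(docs)

# HDBSCAN infers the number of clusters from density, so no k required;
# points it can't assign get the noise label -1
labels = HDBSCAN(min_cluster_size=2).fit_predict(embeddings)
```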