This would be local, and I'd rather it deal with metadata, transcripts, PDFs, etc., rather than an entire hard drive of files. Essentially a massive amount of text from various sources collected into one location.

And I'm aware no single database is best for every use case. I'm looking for one that handles large volumes of text and integrates well with LLMs (or advice on what that integration should look like); that's why I'm on the hunt for options. Any ideas or trade-offs you can offer are appreciated.

Discussion

If you just want your LLMs to read the content then something like MongoDB would work well since it's a document store. You can add your own metadata.
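A minimal sketch of what that could look like: shaping a transcript into a document-store record with your own metadata fields. The field names, collection name, and the `make_doc` helper are assumptions for illustration, not a fixed schema.

```python
def make_doc(source_path, kind, text, tags):
    """Build a document-store record carrying our own metadata.

    Field names here (source, kind, tags, chars) are illustrative
    assumptions -- MongoDB is schemaless, so use whatever fits.
    """
    return {
        "source": source_path,
        "kind": kind,            # e.g. "pdf", "transcript", "metadata"
        "tags": tags,
        "text": text,
        "chars": len(text),      # cheap stat for later filtering
    }

doc = make_doc("notes/interview.txt", "transcript", "full transcript text here", ["interview"])

# With a local MongoDB running (requires the pymongo package):
# from pymongo import MongoClient
# coll = MongoClient("mongodb://localhost:27017")["corpus"]["docs"]
# coll.insert_one(doc)
```

The LLM side can then pull whole documents (or just the `text` field) by metadata query instead of walking the filesystem.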

If you want to query against the data without iterating through the documents each time then you might want to chunk and embed your content and store the vectors in a vector database like Qdrant or similar.
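The chunk-then-embed flow might be sketched like this. The chunker below is real; the embedding and upsert steps are left as hedged comments because they need a running Qdrant instance and an actual embedding model (e.g. sentence-transformers), and the collection name, vector size, and `embed` function are assumptions.

```python
def chunk(text, size=500, overlap=50):
    """Split text into overlapping character windows for embedding.

    Overlap keeps context that straddles a chunk boundary retrievable
    from both neighboring chunks.
    """
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

chunks = chunk("some long transcript text ... " * 50)

# With a local Qdrant running (requires qdrant-client and an embedding
# model providing embed(chunk) -> list[float]):
# from qdrant_client import QdrantClient
# from qdrant_client.models import PointStruct, VectorParams, Distance
# client = QdrantClient("localhost", port=6333)
# client.create_collection(
#     "corpus", vectors_config=VectorParams(size=384, distance=Distance.COSINE))
# client.upsert("corpus", points=[
#     PointStruct(id=i, vector=embed(c), payload={"text": c, "source": "notes/interview.txt"})
#     for i, c in enumerate(chunks)])
```

Storing the source path in each point's payload gives you the "pointer back to the file" pattern, so a similarity hit can be resolved to the full document in the primary store.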

Yeah, that's more what I had in mind: vectorizing alongside the actual database of files so I can recall content later via pointers back to the source. I'll look into Mongo + Qdrant and start playing around, thanks 🙏

Take a look at Neo4j. They have a tutorial that explains and explores what makes graphs useful.