Thanks for the speedy reply! That sounds brilliant. Are there any specific self-hosted LLMs you'd recommend for such a use case?


Discussion

I like Mixtral 8x7B, which has a 32k context window.
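
If you want to kick the tires on it, here's a minimal sketch of querying a self-hosted Mixtral 8x7B through Ollama's local HTTP API. It assumes Ollama is running on its default port and that you've already pulled the `mixtral:8x7b` model; adjust the endpoint and model tag to match your own setup.

```python
# Minimal sketch: query a locally hosted Mixtral 8x7B via Ollama.
# Assumes Ollama is running on localhost:11434 (its default) and
# the "mixtral:8x7b" model has been pulled beforehand.
import json
import urllib.request

def ask_mixtral(prompt: str) -> str:
    payload = {
        "model": "mixtral:8x7b",        # assumed model tag in your Ollama library
        "prompt": prompt,
        "stream": False,                # return one JSON object instead of a stream
        "options": {"num_ctx": 32768},  # request the full 32k context window
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default generate endpoint
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_mixtral("Summarize the advantages of self-hosted LLMs."))
```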