If that’s the only one you’ve messed around with, that could be part of the overpromising vibe you’ve gotten. Llama2-7B is quite a bit less capable than the full Llama2 model, and that model is quite a bit less capable than GPT-4. Token generation on your Embassy is probably a lot slower than what I’m used to on OpenAI’s servers, though self-hosted is awesome. I want to get some hardware to do that myself someday.

Discussion

I'd seen other people's results from the newest models before I even touched one myself. Like I said, I'm not shitting on 'em, only on the blown-way-out-of-proportion claims about 'em. There's nothing magical (for lack of a better term) about 'em. They do what they do, but there's nothing that can reasonably or rationally be called intelligent about 'em. When one answers a long-sought-after question that no human has been able to, then maybe I'll change my tune, but as it currently stands, it's just searching data and returning the most likely desired results.
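For what it's worth, that "most likely result" framing roughly matches how these things generate text: they repeatedly score candidate next tokens and pick from the most probable ones. Here's a toy sketch in Python, with the vocabulary and probabilities completely made up, just to show the loop rather than any real model:

```python
import random

# Toy "model": given the text so far, return made-up probabilities
# for a handful of candidate next tokens. A real LLM computes these
# scores with a neural network over a vocabulary of tens of thousands.
def next_token_probs(text):
    if text.endswith("The sky is"):
        return {" blue": 0.7, " clear": 0.2, " falling": 0.1}
    return {" .": 0.5, " and": 0.3, " very": 0.2}

def generate(prompt, steps=3, greedy=True):
    text = prompt
    for _ in range(steps):
        probs = next_token_probs(text)
        if greedy:
            # Always take the single most likely token.
            token = max(probs, key=probs.get)
        else:
            # Or sample, weighted by probability, for more varied output.
            token = random.choices(list(probs), weights=probs.values())[0]
        text += token
    return text

print(generate("The sky is"))  # -> "The sky is blue . ."
```

That's the whole trick in miniature: no lookup of a stored answer, just repeated "what token probably comes next" picks, which is why the output sounds plausible whether or not it's right.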

Oh, and as for the self-hosting: Embassy can run on any PC with the proper specs or on a RasPi, so if ya got an old PC or laptop laying around, it may work.