if an llm takes a long time to respond, does that mean it was a really good question? #asknostr
Discussion
It has more to do with the algorithm and the range of possible answers.
Models like DeepSeek in deep-thinking mode will do a lot of "self reflecting" before answering, and if connected to the internet, the amount of data to correlate before answering increases further...
Now consider that the most used ones (ironically the ones people should avoid, like ChatGPT and the likes from the usual suspects, due to privacy concerns) have a huge number of users. IMO those giant techs can take the traffic, but some think bottlenecks are plausible... I think that is true for the little guy like DeepSeek but not ChatGPT.
interesting. what's the best free llm to use? i want it to remember my chats but also value privacy.
hmmm
One or the other. If you want it to remember you and you want best performance, you need one that connects to the internet and fetches data to correlate.
Options: DeepSeek, Grok
Workaround:
Use a mobile (Android) with an always-on VPN (Mullvad for instance, paid with Bitcoin over LN)
Use a new email address only for the LLM when you register and never use it for anything else. Grok is web-based; DeepSeek has both app and web options. If on desktop, same thing: VPN and dedicated emails for LLM accounts.
Caveat:
Not perfect, everyone makes mistakes: failing to have the VPN on will expose your IP address to the LLM provider and link it to your email address and your mobile operator (or, if using WiFi, your ISP).
Full privacy, no memory, high performance:
Don't get any account and use DuckDuckGo AI... You could try the DuckDuckGo browser, create an account, and trust their BS that they don't sell your data; I don't believe it.
Full Privacy, less efficient:
For that you will need to run your own with LM Studio and load models you can download from [huggingface](https://huggingface.co/docs/hub/en/models-downloading). I would suggest DeepSeek's latest and greatest, but you can browse around. If you are tech savvy you can build your own based on your favorite ones, that is what I do... You can remove a lot that is not needed.
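If you would rather script the download than use LM Studio's built-in model browser, here is a minimal sketch with the huggingface_hub library. The repo and file names are just examples, swap in whichever model and quantization you actually pick:

```python
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Example only: replace repo_id and filename with the model you choose on huggingface.
model_path = hf_hub_download(
    repo_id="TheBloke/deepseek-llm-7b-chat-GGUF",  # example quantized repo
    filename="deepseek-llm-7b-chat.Q4_K_M.gguf",   # a quant small enough for a laptop
    local_dir="./models",                          # point LM Studio at this folder
)
print("Saved to:", model_path)
```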
This option is not that high performance because of the lack of internet access, but it is still very helpful unless you need something post Jan 2024. Bear in mind the models are based on old data, most dating back to early 2023 as their data cutoff... That is why you have Leo, Grok and DeepSeek all connected to the internet to fill in the blanks and add value with current data.
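And if you want to hit the local model from your own scripts instead of LM Studio's chat window, a minimal sketch, assuming you have enabled LM Studio's local OpenAI-compatible server (port 1234 is its usual default, and the model name is whatever your server lists, both are assumptions here):

```python
# pip install openai  (used only as a client; nothing leaves your machine)
from openai import OpenAI

# LM Studio exposes an OpenAI-compatible endpoint; the API key is ignored locally.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="local-model",  # placeholder: use the model name shown in LM Studio
    messages=[{"role": "user", "content": "Why do offline models have a knowledge cutoff?"}],
)
print(resp.choices[0].message.content)
```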
hope it helps
this is so helpful, thanks!