ChatGPT is corny. Probably because it's censored.
That’s why I’m a hardcore LLaMA maxi.
Is that Facebook's thing? What makes it better?
Yeah, Facebook spent the money to train it on an open dataset and then the weights got leaked. It’s not censored at all and runs completely locally on your device. The llama.cpp implementation is incredibly fast.
Ok I need #llama now 😄
I did some quick research. ChatGPT still has way, way more parameters.
- GPT-3.5 and GPT-4 are Large Language Models (LLMs); ChatGPT (GPT-3.5) has 175 billion parameters
> Parameters include weights and biases.
> The number of parameters in a neural network is directly related to the number of neurons and the number of connections between them.
> Needs supercomputers for training and inference
- LLaMA (by Facebook): Open and Efficient Foundation Language Models
>[llama.cpp](https://github.com/ggerganov/llama.cpp)
> a collection of foundation language models ranging from 7B to 65B parameters
> Facebook says LLaMA-13B outperforms GPT-3 while being more than 10x smaller
> If it's 10x smaller, we don't need a supercomputer
- alpaca.cpp (Stanford Alpaca is an instruction-following LLaMA model)
> It is a fine-tuned version of Facebook's LLaMA 7B model
> It is trained on 52K instruction-following demonstrations.
> uses around 4GB of RAM.
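Some back-of-the-envelope math on those numbers (my own sketch, not from any of the model cards): parameters are just the weights plus biases of each layer, and the memory a model needs is roughly parameter count times bits per parameter. That's why a 4-bit-quantized 7B model fits in the ~4GB of RAM mentioned above, while a 175B model at 16-bit needs a few hundred GiB.

```python
def linear_params(n_in: int, n_out: int) -> int:
    """Parameter count of one dense layer: a weight per connection plus a bias per output neuron."""
    return n_in * n_out + n_out

def model_memory_gib(n_params: float, bits_per_param: int) -> float:
    """Rough memory footprint of the weights alone (ignores activations and KV cache)."""
    return n_params * bits_per_param / 8 / 2**30

# One 4096x4096 dense layer already has ~16.8M parameters
print(linear_params(4096, 4096))              # 16781312

# 7B model quantized to 4 bits: ~3.3 GiB, i.e. "around 4GB of RAM"
print(round(model_memory_gib(7e9, 4), 2))     # 3.26

# 175B model at 16-bit: hundreds of GiB, hence the supercomputer
print(round(model_memory_gib(175e9, 16), 1))  # 326.0
```

The exact footprint depends on the quantization format and runtime overhead, but the order of magnitude is what matters here.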
That’s exactly what makes LLaMA the better model. It has way fewer parameters and still gets GPT-3.5-level outputs.
Parameters aren’t a measure of how good a model is, just how large.
It has its limits, yes. Fortunately there are other language models by now with fewer built-in restrictions.
It is. It doesn’t need to be your friend, it’s a tool.
In my experience it's only useful for coding tasks. Any kind of creative work it does makes me cringe.
I miss the initial one, from when it launched.
Yeah, it feels like it's gotten worse.
I would love for it to be open-sourced, but the potential for malicious use is also crazy with ChatGPT.
Also try out https://www.phind.com