Compile llama.cpp in Termux, use wget to grab the latest LLaMA 2 7B Q4 model from Hugging Face, then run the inference command.
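A minimal sketch of those steps, assuming a CPU-only make build of an older llama.cpp checkout and TheBloke's Q4_K_M GGUF quantization on Hugging Face (the exact repo, filename, prompt, and token count below are assumptions, not part of the original note):

    # install build tools and wget inside Termux
    pkg install git clang make wget

    # fetch and compile llama.cpp (CPU only)
    git clone https://github.com/ggerganov/llama.cpp
    cd llama.cpp
    make

    # download a 4-bit LLaMA 2 7B chat model (repo/filename assumed)
    wget https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGUF/resolve/main/llama-2-7b-chat.Q4_K_M.gguf

    # run inference: -m model path, -p prompt, -n tokens to generate
    ./main -m llama-2-7b-chat.Q4_K_M.gguf -p "Hello from Termux" -n 128

Note that newer llama.cpp checkouts deprecate the Makefile build in favour of cmake, which produces a llama-cli binary under build/bin/ instead of ./main.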
