Thank you for the response and insight. I’ll try out some 13b models today.
I noticed there are some ways to hook Llama 2 / Code Llama into VS Code. I think the "Continue" extension was one of them.
I’d like to do that and have it evaluate some nostr protocol code.
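Something like this rough sketch is what I have in mind for the "evaluate some nostr code" part — assuming the 13b Code Llama is being served locally through Ollama's default HTTP API (the model tag, port, prompt, and placeholder snippet are all just assumptions on my end, separate from the Continue setup itself):

```python
# Minimal sketch: send a nostr code snippet to a locally served Code Llama 13B
# via Ollama's /api/generate endpoint and print the model's review.
# Assumes Ollama is running on its default port with `codellama:13b` pulled.
import json
import urllib.request

NOSTR_SNIPPET = """
// placeholder: paste the nostr protocol code to be evaluated here
"""

payload = json.dumps({
    "model": "codellama:13b",   # assumed local model tag
    "prompt": "Review this nostr protocol code for bugs and spec issues:\n"
              + NOSTR_SNIPPET,
    "stream": False,            # return one JSON object instead of a token stream
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",   # Ollama's default local endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])   # the model's review text
```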
Following you for more LLM discussion 🤙