Try Venice.ai and select the llama3.1 model. It's a great option for a big model that you can't run locally.
Otherwise, a local llama3.1 8B is solid if you have the RAM (Llama 3.1 ships in 8B, 70B, and 405B sizes; there is no 20B variant).
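As a rough way to gauge "if you have the RAM": a common back-of-the-envelope estimate (a sketch, not an official formula; the 20% overhead for KV cache and runtime buffers is an assumption) is parameter count times bytes per parameter, plus some headroom:

```python
def estimated_ram_gb(params_billion: float, bytes_per_param: float,
                     overhead: float = 0.2) -> float:
    """Rough RAM needed to run a local LLM.

    params_billion: model size in billions of parameters
    bytes_per_param: 2.0 for fp16, ~0.5 for 4-bit quantization
    overhead: assumed fraction for KV cache and runtime buffers
    """
    weights_gb = params_billion * bytes_per_param  # 1B params * 1 byte ~ 1 GB
    return weights_gb * (1 + overhead)

# llama3.1 8B at 4-bit quantization (~0.5 bytes/param):
print(f"{estimated_ram_gb(8, 0.5):.1f} GB")  # about 4.8 GB
# llama3.1 70B at fp16 (2 bytes/param) shows why it won't fit on most desktops:
print(f"{estimated_ram_gb(70, 2.0):.0f} GB")  # about 168 GB
```

So a quantized 8B fits comfortably on a 16 GB machine, while the 70B and 405B sizes are where hosted options make sense.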
Very cool, thanks for that recommendation!