Try Venice.ai and select the llama3.1 model. It's a great option for a big model that you can't run locally.
Otherwise, a local llama3.1 (8B, or 70B if you have the RAM) is solid.
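If you go the local route, here's a minimal sketch of querying the model once it's being served by Ollama (this assumes you've installed Ollama and pulled the model with `ollama pull llama3.1`; the endpoint and payload follow Ollama's /api/generate format, and the `requests` library is used for the HTTP call):

```python
import requests

# Ollama serves a local HTTP API on port 11434 by default.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_llama(prompt: str) -> str:
    """Send a prompt to a locally served llama3.1 model and return its reply."""
    response = requests.post(
        OLLAMA_URL,
        json={
            "model": "llama3.1",  # tag created by `ollama pull llama3.1`
            "prompt": prompt,
            "stream": False,      # one complete response instead of a token stream
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    print(ask_local_llama("Summarize the trade-offs of running an LLM locally."))
```

The same call works for whichever size you pulled; just change the model tag (e.g., "llama3.1:70b").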
Very helpful! I'm investigating both options, for work (where we have great hardware) and for home, so this is great to know.