There seem to be at least a few people on Nostr who are interested in Large Language Models.

What does everyone use right now? ChatGPT4? Llama2? Bard?

I'll go first. My top model from a reasoning standpoint is Platypus 70B, which handles complex reasoning really well.

I've also really been liking CodeLlama 34B lately, although it's not as good with complex problems.

#AI #LLM #ChatGPT


Discussion

I use ChatGPT

Unfortunately it looks like with Code 🦙 you have to download it 🙅🏻‍♀️

ChatGPT is the benchmark that everyone is chasing and for good reason!

And yes, the 🦙 models are meant to be run locally on your own computer, but that requires a powerful machine. You can check out huggingface.co/spaces and search for a model to try it out for free (although there might be a queue).

Has anybody found a cloud service provider that rents GPUs and accepts bitcoin?

I like the idea of installing it myself but I don't have the local power to use the more advanced models. So GPU rental seems the way to go.

Obviously I prefer to use Bitcoin for many reasons but one is for privacy. I could probably try using a prepaid credit card but I also know many providers refuse those. Or they may be usable only in some countries.

Lunanode is well known for accepting bitcoin but they don't offer GPU rentals.

The closest I've come is bitlaunch.io for a vultr.com instance (they do GPUs), but we can't freeze/shelve the instance between uses. Neither can we take a snapshot between uses to facilitate the next instance creation. I guess there's the option of leaving the instance on, but that would run into the thousands per month...

#AI

#opensource

#LLM

Runpod accepts Bitcoin, but only through crypto.com, so it's kind of useless. I've yet to come across a good cloud GPU service that respects privacy.

Thank you for that reference. I looked them up and read their Terms of Service. A first for me: you can't even copy-paste from their ToS page, and the page source doesn't show the text either. (Print-to-PDF did create a file I could play with, though.)

Clause 2, on intellectual property, says we can't use the "Content" (whatever that means) for commercial purposes, only personal ones. More importantly, it seems to say the Content is theirs (and so, presumably, whatever is created in response to our questions and prompts?). I'm not sure what all that would mean when using an LLM.

I didn't even know crypto.com was still a thing. I thought they were toast! And I never knew there was such a thing as crypto.com payments. Presumably it's like BitPay? I'll look at it a little more.

Lunanode not only accepts bitcoin, they do so through btcpayserver and they accept lightning. I love that service. I wish they also offered GPUs!

The search for a GPU-renting, bitcoin-accepting, friendly ToS-offering service provider continues...

Oh, now I see what the crypto.com payment method is. One needs a crypto.com account; I can't seem to pay on Runpod with bitcoin without one. I thought it was like BitPay (which is bad enough), but it's even worse. Oh well.

The search continues.

Looking to try out a VPS service paid with bitcoin.

My first search took me to vpsservice.com, which literally required personal identification with a driver's license or similar.

Batshit crazy.

Took to searching on nostr instead, and I'll now be looking into Lunanode based on your mention.

Thanks.

ChatGPT4. The only reason so far is that it's on iOS.


Poe.com has them all. It's quite useful.

I haven't used POE yet, thanks. I'll give it a try!

nostr:npub1a76gf5pyqlwrsl96y6m7x80h7y2uhx5h2qcm62r63q6ppw5l68kqu87y46 is based on GPT4All (I would need to lookup the exact model and I'm not on my laptop right now)

GPT4All is a great program. I really hope they continue to expand its capabilities with more plugins.

I like how it integrates into Python. It was pretty easy to combine nostr + AI in Python :)
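For anyone curious what combining nostr + AI in Python can look like: below is a minimal, hypothetical sketch that wraps locally generated text into an unsigned nostr kind-1 event. The model filename and pubkey are placeholders, and a real client would serialize, hash, and Schnorr-sign the event per NIP-01 before publishing.

```python
import json
import time


def build_kind1_event(pubkey: str, content: str) -> dict:
    """Build an unsigned nostr kind-1 (text note) event from generated text.

    The `id` and `sig` fields are omitted; a real client would compute the
    event hash and Schnorr signature per NIP-01 before publishing to a relay.
    """
    return {
        "pubkey": pubkey,
        "created_at": int(time.time()),
        "kind": 1,
        "tags": [],
        "content": content,
    }


if __name__ == "__main__":
    # Requires `pip install gpt4all`; downloads the model on first run.
    from gpt4all import GPT4All

    model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # model name is illustrative
    reply = model.generate("Say hello to nostr in one sentence.", max_tokens=50)
    print(json.dumps(build_kind1_event("ab" * 32, reply), indent=2))
```

The event-building part is a pure function, so it can be tested without loading a model; the LLM call stays behind the `__main__` guard.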

What a time to be alive.

And the most exciting things are ahead. This space will probably look completely different in only 6 months 😂

GPT4 and Claude2. I like Claude's large context window and the way it summarizes, but I'm disappointed they haven't opened up their API to me...

Context window is a bit of a hurdle to be overcome, especially for chatbots and storywriter models.

Commercial LLMs have the upper hand in the early stages, but open-source models have so much potential... And you won't have to deal with restricted access.

I'm using GPT4 on a daily basis, mostly because I'm happy with the output and it was easy to get started with. But I'll look into Platypus.

Flan-T5-XL

Ok, I had to Google this one 😅

Are they still working on this? I'd love to see more competitors to Llama in the open source space.

nostr:npub1lj3lrprmmkjm98nrrm0m3nsfjxhk3qzpq5wlkmfm7gmgszhazeuq3asxjy I'm running Llama2 and CodeLlama locally on my laptop. Lots of fun. I think only the 7B models. Wonder if I could run 13B; I have 24 GB of RAM.

I really want to be able to feed it docs, PDFs, etc. Currently I'm only running it on the command line via Ollama.

You could give GPT4All a try. It has a built-in plugin that can reference local docs. I find it does a good job summarizing concepts, but it's not so great at pulling out specific information.

24 GB is sufficient to run 13B models at 4- or 8-bit quantization; at 16-bit the weights alone need roughly 26 GB, so stick to quantized builds 👍
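As a back-of-envelope check (weights only, ignoring the KV cache and runtime overhead, which add more), memory ≈ parameter count × bits per weight ÷ 8:

```python
def weight_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate memory for model weights alone, in decimal GB.

    Ignores KV cache, activations, and runtime overhead.
    """
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9


for bits in (4, 8, 16):
    print(f"13B @ {bits}-bit: ~{weight_memory_gb(13, bits):.1f} GB")
# 4-bit (~6.5 GB) and 8-bit (~13.0 GB) fit in 24 GB; 16-bit (~26.0 GB) does not.
```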

Thank you for the response and insight. I’ll try out some 13b models today.

I noticed there are some ways to hook Llama2/CodeLlama into VS Code. I think the "Continue" extension was one of them.

I’d like to do that and have it evaluate some nostr protocol code.

Following you for more llm discussion 🤙

GPT4 because there are a lot of tools that integrate with it easily, which makes my life easier.

I use GPT4 for development, mainly brainstorming ideas, as the code it produces rarely works as-is for my needs.

I play with Llama2 models locally all the time but haven't found them as good for my needs.

ChatGPT as the baseline, now experimenting with Llama.