Remember, this doesn’t use any OpenAI API; you’re literally running the entire model.

Moreover, any developer can use my Colab notebook shortcuts to deploy this Llama language model for free and offer it within their apps.

It’s even easier than running your own relay. How does this make any sense? nostr:note1rtkrkwdq0k42pp26v22ltjdvxjf6umdkqzq4cfxgeutustfzzckq26djvy
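Since the point of running the model yourself is that you call it directly instead of an API, here is a minimal sketch of the prompt format the Llama 2 chat checkpoints expect (the `[INST]`/`<<SYS>>` template); the function name and messages are illustrative, not from the notebook itself.

```python
# Illustrative helper: builds a prompt in the Llama 2 chat template
# (the [INST]/<<SYS>> markers the llama-2-*-chat models were trained on).

def build_llama2_prompt(system_msg: str, user_msg: str) -> str:
    """Wrap a system message and a user message in Llama 2 chat markers."""
    return (
        f"<s>[INST] <<SYS>>\n{system_msg}\n<</SYS>>\n\n"
        f"{user_msg} [/INST]"
    )

prompt = build_llama2_prompt(
    "You are a helpful assistant.",
    "Explain Nostr in one sentence.",
)
print(prompt)
```

Whatever library you use to load the weights, you would pass a string like this in and let the model generate the text after `[/INST]`.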


Discussion

Thank you for sharing, this is insane. Going to get it up and running on my work computer and start having it do its thing.

Do you need a GPU?

It's got a draconian license, and it's falsely advertised as open source.

Yeah, I was looking into that. It's not open source, but with this method you can swap in other models. I'll have some examples with Stability AI, but I must say hosting your own models seems like a step in the right direction.

Does this make GPT obsolete?

Hi Iefan, I clicked on the link and I see the image in your post, but I don't see the "play" link.

Hi Iefan, how would I load information into this model for it to analyze?

First, I'd suggest visiting the Hugging Face model repos; some models are specifically designed for analysis, and you'll even find Llama 2 variants with that configuration. If you want to add your own data, that's known as fine-tuning; it has a learning curve, so you should watch some tutorials. By the way, this is the play button.
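One step that comes up whether you fine-tune or just prompt the model with your own documents is splitting the text into pieces that fit the context window. A minimal sketch, assuming a word-based chunker (real code would count tokens with the model's tokenizer; the function name and sizes here are illustrative):

```python
# Hypothetical helper: split a document into overlapping word-based chunks
# so each piece fits inside the model's context window. Word counts are a
# rough stand-in for token counts, which a real pipeline would use instead.

def chunk_words(text: str, size: int = 256, overlap: int = 32) -> list[str]:
    """Split text into chunks of `size` words, overlapping by `overlap` words."""
    words = text.split()
    step = size - overlap
    return [
        " ".join(words[i:i + size])
        for i in range(0, max(len(words) - overlap, 1), step)
    ]

chunks = chunk_words("some long report text " * 100, size=40, overlap=8)
print(len(chunks), "chunks")
```

The small overlap keeps a sentence that straddles a chunk boundary visible in both neighboring chunks, so the model doesn't lose its context mid-thought.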