Listened to this great talk between nostr:npub1h8nk2346qezka5cpm8jjh3yl5j88pf4ly2ptu7s6uu55wcfqy0wq36rpev and Dhruv, where they discussed the tendency of AI solutions to get decentralized.

Please, share some existing options, Guy. We need to break free from the centrally controlled AI tools ✊

https://fountain.fm/episode/4dTzJJgLxwJMcdwhfDg2 nostr:note1xzwdwwt5mqvv4pypwy9zsldvuz7zurzx9mcftu3t0rzucg23gt7s922gak

Discussion

Running your own is still difficult, and ChatGPT still gets better results than the smaller models you can self-host. There are a ton of advancements in both efficiency and chip design that I think will change this dynamic in the coming years, along with the major corporate options neutering themselves to “be safe.”

The sovereign options are good and I’ll be covering which ones I like the most on AI Unchained, but I still think it’ll be a few years before we have solutions to the important hardware limitations.

Thanks a lot for your answer and especially for AI Unchained!

I partly agree. It really depends on what one is using the models to accomplish. With LocalAI and vLLM, it's pretty easy to use an open-source model as a drop-in replacement for the OpenAI API. Companies like OpenAI are working toward AGI, while many challenges can be solved with a Mixture of Agents (Experts) approach to AI agents without the need for expensive hardware, with MemGPT, AutoGen, and many others leading the way toward autonomous agents. NousResearch beat OpenAI to announcing a 128k context window using YaRN. A year from now, I'd say context length will be a non-issue, or we'll have context windows of 3M+. The Law of Accelerating Returns rings very true in the LLM and generative AI space.

If one is looking for unaligned (uncensored) models, Dolphin is most likely the best in terms of size versus performance at 7B parameters. Many 7B models now equal the performance of much larger 70B models like LLaMA2. We can already overcome a lot of hardware limitations by quantizing models (see GGUF and AWQ). At the current exponential rate of growth, in a year we'll more than likely have AGI.
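To illustrate the drop-in-replacement point: both LocalAI and vLLM expose an OpenAI-compatible HTTP API, so an OpenAI-style chat request only needs its base URL pointed at the local server. A minimal sketch, where the port and model name are assumptions (they depend on how you launch the server):

```python
import json
import urllib.request

# Assumed local endpoint; LocalAI and vLLM both serve an
# OpenAI-compatible API, commonly on localhost.
BASE_URL = "http://localhost:8080/v1"

def build_chat_request(model, prompt):
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(model, prompt):
    """POST the payload to the local OpenAI-compatible endpoint."""
    data = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# With a LocalAI or vLLM server running at BASE_URL:
# print(chat("mistral-7b-instruct", "Hello!"))
```

Because the request and response shapes match OpenAI's, existing client code usually only needs the base URL (and model name) swapped.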

My company, nostr:npub14hujhn3cp20ky0laq93e4txkaws2laxp80mfk3rv08mh35qnngxsg5ljyg (proudly on nostr!), is releasing some very cool stuff in the near future related to AI agents and domain-specific, fine-tuned, lightweight, and efficient models that will run on edge, mobile, and IoT devices. There are a lot of non-OpenAI projects out there that are open source and transparent, with more by the day.

👀

If I had to paint in broad strokes, I would say:

• Open-source AI benefits from a massive variety of highly specialized LLMs and models for various tasks. The magic is in knowing how to intelligently use them together.

• The major LLMs tend to be much better at general tasks and open-ended questions.

So if I'm looking for a quick way to find an answer or to be pointed in the right direction, I'm using ChatGPT.

If I'm trying to build a workflow, run something in an automatic sequence, or write a specific script to accomplish a set of tasks, open source is definitely the way to go.
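The "use specialized models together" idea can be sketched as a simple router: scripted tasks go to local specialized models, and anything unmatched falls back to a general hosted model. All model names and task categories below are hypothetical, for illustration only:

```python
# Hypothetical routing table: task category -> local specialized model.
# The names here are assumptions, not any real project's configuration.
SPECIALIZED_MODELS = {
    "code": "deepseek-coder-6.7b",
    "summarize": "mistral-7b-instruct",
    "extract": "dolphin-2.2-7b",
}

def pick_model(task: str, general_fallback: str = "gpt-4") -> str:
    """Route a task to a specialized local model when one exists;
    otherwise fall back to a general-purpose hosted model."""
    return SPECIALIZED_MODELS.get(task, general_fallback)

print(pick_model("code"))       # routed to the local code model
print(pick_model("brainstorm")) # no specialist -> general fallback
```

The point isn't the specific mapping but the shape: a cheap dispatch layer lets small local models handle the repetitive pipeline steps while the big hosted model handles the long tail.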

Check out ollama.ai ... you can run a model on your laptop!
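Once Ollama is installed and serving, it exposes a local HTTP API (by default on localhost:11434) that you can call from any language. A minimal sketch; the model name is an assumption and should be whatever you've pulled locally:

```python
import json
import urllib.request

# Ollama's local generate endpoint (default port 11434).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model, prompt):
    """Build a payload for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model, prompt):
    """Send a prompt to the local Ollama server and return its reply."""
    data = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# With `ollama serve` running and the model pulled locally:
# print(generate("llama2", "Why is the sky blue?"))
```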