Thanks for the explanation. I expected as much given it's not open source.
I do have Apple silicon, but I really wanted something self-hosted so it's accessible from my phone. Can't justify the cost of building my own box at home for AI at the moment.
Will take a look at the lifetime Pro. I have a MetaMask wallet, so I should be able to get it set up.
Well, open source wouldn't guarantee anything if they run it on their servers; you have no way to verify what they're actually running even if it were open.
Yes, but it adds another level of transparency.
Not really.
If they want to make good on their pinky promise, they can right now.
If they don't, being open source won't stop them at all. They can log whatever they want anywhere on their infrastructure.
But with open source, at least someone (e.g. me) can take what they've done and run it how I want. Although that brings it back to my problem of not currently having the right hardware.
You can run ollama today.
https://ollama.com/
They have not done anything special for running the models; the models themselves are open source. The value of the service is that they run the models on their infrastructure. If you want to run it yourself, there are plenty of better solutions like ollama, plus many frontends such as open webui.
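If it helps, here's a rough sketch of what talking to a local ollama instance looks like over its HTTP API, assuming it's serving on the default port and you've already pulled a model (the model name and prompt are just placeholders):

```python
# Minimal sketch: query a locally running ollama instance over its HTTP API.
# Assumes `ollama serve` is on the default port 11434 and the "llama3" model
# has already been pulled; swap in whichever model you actually use.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",            # any model you've pulled with ollama
        "prompt": "Why is the sky blue?",
        "stream": False,              # one JSON blob instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```

Frontends like open webui are essentially nicer interfaces sitting on top of that same local API.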
Yes, I know venice.ai doesn't have any magic sauce. My main constraint is hardware: I have a home server but no capable GPU. A stopgap could be running it locally on my MacBook, but I'd love to have something that's accessible anywhere, roughly like the sketch below.
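Something like this is what I'm after: the same ollama API, just reachable from my phone. The hostname here is only a placeholder for whatever address the MacBook would get on a VPN or Tailscale network, and ollama would need to listen on something other than localhost:

```python
# Sketch of the "accessible anywhere" setup I'd want: the same ollama chat API,
# pointed at whatever box ends up hosting it. The hostname is a placeholder
# (e.g. a VPN/Tailscale address of the MacBook), and ollama would have to be
# bound to a non-localhost interface for this to work.
import requests

OLLAMA_URL = "http://my-macbook.example:11434"  # placeholder, not a real host

resp = requests.post(
    f"{OLLAMA_URL}/api/chat",
    json={
        "model": "llama3",
        "messages": [{"role": "user", "content": "Hello from my phone"}],
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```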