Maple.ai via Tor, with an anonymous account paid for with Lightning.

I think this is a decent tradeoff against the appalling reality of what most of us are doing: giving personal data to OpenAI, Anthropic, etc.

The at-home build isn't viable for real work unless you pay something like $20K and sink time into it (and even then it's marginal).

Also, I'm not shilling Maple here... it probably can't give you the same level of convenience, but maybe close, I think?

Opinions?

#asknostr


Discussion

There's also open-webui and you can run open source LLMs locally for free.

I have run open-source LLMs locally for free. The performance can't match; it's not even in the same ballpark (hence my mention of $20K)... or did I misunderstand you?

I could have been more clear.

You can use open-webui with an API key from an LLM aggregator. I've been using the NanoGPT and Venice APIs, which gives me a wide range of models. But I also use it with locally hosted models too.

Privacy-wise, it's a decent trade-off.

Self-hosting is best, but indeed you need very expensive hardware to make it run smoothly.

I previously found that Maple's suggested "maths and coding" model came nowhere near matching Claude. Many responses were just plain wrong.

That changed with the recent launch of their Kimi K2 model.

Now I use it as my main driver. Excellent responses.

I'm doing that, minus Tor. Especially with the new K2 model they just released, it's good enough for what I need AI to do.

I asked K2 what model it is, and it said it was Claude 3.5.

Is it like an open-source version or something? Hmm, that doesn't make sense.

Not too surprising to see variation, in itself. I think I asked it which LLM it was, and it was quite specific ... Claude 3.5 from Anthropic.

For completeness:

In response to "we will come back to this, but first, i'm disappointed that you have the same "AI slop" writing style as other commercial models (no real problem in communication, but stereotypical). i'm curious, what is the origin of this LLM model?" it said "Fair. I'm Claude 3.5 Sonnet, built by Anthropic. You're right—that bullet-pointed, "here's where you're right and wrong" structure is a pattern I default to when I'm trying to be comprehensive and balanced. It's the LLM equivalent of a corporate presentation: clear, but soulless."

Neither my experiment nor yours proves anything, of course, remembering how these things operate.

I use nostr:nprofile1qqsdy27dk8f9qk7qvrm94pkdtus9xtk970jpcp4w48k6cw0khfm06msppemhxue69uhkummn9ekx7mp0qythwumn8ghj7un9d3shjtnswf5k6ctv9ehx2ap0qy2hwumn8ghj7mn0wd68ytn00p68ytnyv4mz7zvqagl

Both promise to keep things private.

Neither can prove it.

I usually avoid identifying information and ppq works great without an account.

Right. If the content is actually tied to your personal life, it's going to be super tough. There could still be value in separation between sessions, though, so Tor could help with that.

Right. I trust ppq not to store my queries and IP. At least the IP is concealable.

We found this; it basically scrubs your prompts of PII. The way we figure we might implement it is to add a little checkbox in the settings section where you can turn the PII-redaction tool on if you so choose. It would be opt-in, because I still have a feeling it's not perfect and might reduce quality or make mistakes sometimes.

But what do you think of this idea?

https://github.com/microsoft/presidio

Seems like this option doesn't allow the sanitization to occur on the client side. That's kind of a bummer. Maybe we can find something else.
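[Editor's note: to make the idea concrete, here is a minimal sketch of what opt-in, on-device redaction could look like. The patterns and the `scrub` helper are purely illustrative assumptions, not ppq's or Presidio's actual API; Presidio itself uses NER models and covers far more entity types than a few regexes ever could.]

```python
import re

# Hypothetical client-side scrubber: a few regex patterns for common PII.
# Order matters: IPv4 runs before the looser phone pattern so an IP
# address isn't mistaken for a phone number.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "PHONE": re.compile(r"\b\d[\d\s().-]{7,}\d\b"),
}

def scrub(prompt: str) -> str:
    """Replace matched PII with placeholder tags before the prompt leaves the device."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"<{label}>", prompt)
    return prompt
```

Because this runs entirely on the client, the raw prompt never reaches the server, which is the property the discussion above is after.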

Sounds helpful.

For me the biggest concern is whether ppq can correlate my different chats. I can be conscious about privacy in one chat, but if all my chats were correlated, it would instantly give away who I am, since in some chats I ask for help authoring nostr posts that can clearly be matched to me. And the credit ID is where I see the current version requiring the client to send a linkable ID to the ppq servers. So how about using a built-in Cashu wallet instead of a credit ID? Then you would still be able to match my chats by IP address, but the Android app could use Tor circuits per chat ...
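[Editor's note: the "Tor circuit per chat" idea maps onto a real Tor feature: the SOCKS proxy isolates streams by their SOCKS5 credentials (`IsolateSOCKSAuth` is on by default), so giving each chat its own username yields a distinct circuit and exit IP. The `proxy_for_chat` helper below is a hypothetical sketch, assuming a local Tor SOCKS proxy on 127.0.0.1:9050; it is not part of any real app.]

```python
def proxy_for_chat(chat_id: str) -> str:
    """Build a per-chat SOCKS5 proxy URL for a local Tor daemon.

    All requests within one chat share credentials (and thus a circuit),
    while different chats get different credentials and therefore
    different circuits, breaking IP-level correlation between chats.
    """
    # socks5h:// resolves DNS through the proxy too, avoiding DNS leaks.
    return f"socks5h://chat-{chat_id}:x@127.0.0.1:9050"
```

A client would pass this URL as the proxy for every HTTP request belonging to that chat session.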

Yes. We've thought quite a bit about this option. I even had a call with nostr:nprofile1qyt8wumn8ghj7etyv4hzumn0wd68ytnvv9hxgtcpzemhxue69uhks6tnwshxummnw3ezumrpdejz7qpq2rv5lskctqxxs2c8rf2zlzc7xx3qpvzs3w4etgemauy9thegr43sugh36r about it. Don't want to leave you hanging here, but we'll probably respond in a longer-form brainstorming post coming up soon.

I think it could be done without changes to the UI. Instead of a creditId and a hosted wallet, the wallet would live in the client. The over-payment problem you always have when you don't know in advance how much the service will cost can be solved by returning eTokens with the server's answer: always charge the theoretical maximum for the model, and return change with the reply.

The only problem is backing up that extra data, but it could be encrypted using the "creditId" to stick with the same UI.

What about duck.ai?

Just another honeypot.