Maple AI cofounder here.
I'm glad more people are starting to care about privacy in AI. We've watched a few companies enter this space so far, and we're still the only one that goes all the way and lets users verify the code for themselves.
You can see our client and server code on GitHub. The client shows you cryptographic proof that the server it's talking to is running the code you see on GitHub; if the proof doesn't match, the client refuses to connect. Any user can make a reproducible build of the code and arrive at the same cryptographic proof.
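For anyone curious what that check looks like, here's a rough sketch (not our actual client code): it assumes a hypothetical /attestation endpoint returning JSON with a "measurement" field, and a real verifier would also check the attestation document's signature chain back to the hardware vendor, which is omitted here.

```python
# Sketch only, with hypothetical endpoint and hostname. Real enclave
# attestation also verifies a signature chain to the hardware vendor.
import json
import urllib.request

# The hash you computed yourself from a reproducible build of the
# server code on GitHub (placeholder value).
EXPECTED_MEASUREMENT = "sha256:<hash-from-your-own-reproducible-build>"

def fetch_attested_measurement(server: str) -> str:
    """Ask the enclave what code it claims to be running."""
    with urllib.request.urlopen(f"https://{server}/attestation") as resp:
        return json.load(resp)["measurement"]

def connect(server: str) -> None:
    """Don't Trust, Verify: only talk to a server running the published code."""
    if fetch_attested_measurement(server) != EXPECTED_MEASUREMENT:
        raise ConnectionError("attestation mismatch; refusing to connect")
    print(f"{server} matches the reproducible build, safe to proceed")

connect("enclave.example.com")  # hypothetical hostname
```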
Think of it as the new version of SSL, that 🔒 lock icon next to the website address in your browser.
http -> https -> httpse (secure enclaves)
nostr:nprofile1qyjhwumn8ghj7en9v4j8xtnwdaehgu3wvfskuep0dakku62ltamx2mn5w4ex2ucpxpmhxue69uhkjarrdpuj6em0d3jx2mnjdajz6en4wf3k7mn5dphhq6rpva6hxtnnvdshyctz9e5k6tcqyp7u8zl8y8yfa87nstgj2405t2shal4rez0fzvxgrseq7k60gsrx6zeuh5t is built on the premise of Don't Trust, Verify.
Yes, there are people who still don't want to trust secure enclaves. They can always run local AI models instead. The trade-off is slower, less accurate AI. Do what fits your risk profile.
Maple AI gives you the power of cloud compute with near-local privacy, and it still stands out from the crowd of private AIs by giving you full visibility into how your encrypted data is handled.
Feel free to drop any questions here or tag me elsewhere. I'm very active on Nostr and dedbird.