nostr:nprofile1qqsgafy9ye4j9p2x8vfmlq6equtpcg4m8ks7v545g0d3f7wwueeq5scprdmhxue69uhkuun9d3shjtnr94ehgetvd3shytnwv46z7qgnwaehxw309ac82unsd3jhqct89ejhxtcluy2vw If I understand correctly (I'm not a dev), this is the use case we need in our company to replace custom GPTs made in OpenAI with Maple AI?
Why it matters: Most AI‑powered apps (calorie trackers, CRMs, study companions, relationship apps eyes 👀) send every prompt to OpenAI, exposing all user data. With Maple Proxy, that data never leaves a hardware‑isolated enclave.
https://blossom.primal.net/21b4ee5b782d240d3eb06c3db41ffa73ec79734f104ee94113ff2dc6d7e771c3.mp4
How it works:
🔒 Inference runs inside a Trusted Execution Environment (TEE).
🔐 End‑to‑end encryption keeps prompts and responses private.
✅ Cryptographic attestation proves you’re talking to genuine secure hardware.
🚫 Zero data retention – no logs, no training data.
Ready‑to‑use models (pay‑as‑you‑go, per million tokens):
- llama3‑3‑70b – general reasoning
- gpt‑oss‑120b – creative chat
- deepseek‑r1‑0528 – advanced math & coding
- mistral‑small‑3‑1‑24b – conversational agents
- qwen2‑5‑72b – multilingual work & coding
- qwen3‑coder‑480b – specialized coding assistant
- gemma‑3‑27b‑it‑fp8‑dynamic – fast image analysis
Real‑world use cases:
🗓️ A calorie‑counting app replaces public OpenAI calls with Maple Proxy, delivering personalized meal plans while keeping dietary data private.
📚 A startup’s internal knowledge‑base search runs through the proxy, so confidential architecture details never leave the enclave.
👩‍💻 A coding‑assistant plug‑in for any IDE points to http://localhost:8080/v1 and suggests code, refactors, and explains errors without exposing proprietary code.
Getting started is simple:
Desktop app (fastest for local dev)
- Download from trymaple.ai/downloads
- Sign up for a Pro/Team/Max plan (starts at $20/mo)
- Purchase $10+ of credits
- Click “Start Proxy” → API key & localhost endpoint are ready.
Docker image (production‑ready)
- `docker pull ghcr.io/opensecretcloud/maple-proxy:latest`
- Run with your MAPLE_API_KEY and MAPLE_BACKEND_URL
- You now have a secure OpenAI‑compatible endpoint at http://localhost:8080/v1
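The Docker steps above can be sketched as a single pull-and-run, a minimal sketch only: the env-var names (MAPLE_API_KEY, MAPLE_BACKEND_URL) and port 8080 come from the post, but the container's internal listen port and the backend URL value are assumptions — check the docs linked below for the real values.

```shell
# Pull the image named in the post
docker pull ghcr.io/opensecretcloud/maple-proxy:latest

# Run it, publishing the port the post uses (8080). Assumes the proxy
# listens on 8080 inside the container; substitute your own key and the
# backend URL from the documentation.
docker run -d \
  -p 8080:8080 \
  -e MAPLE_API_KEY="<your-api-key>" \
  -e MAPLE_BACKEND_URL="<backend-url-from-docs>" \
  ghcr.io/opensecretcloud/maple-proxy:latest
```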
Compatibility: Any library that lets you set a base URL works—LangChain, LlamaIndex, Amp, Open Interpreter, Goose, Jan, and virtually every OpenAI‑compatible SDK.
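Because the endpoint is OpenAI-compatible, any client just needs the base URL swapped. Here's a minimal stdlib-only sketch that builds a chat-completion request against the local proxy — it assumes the proxy exposes the standard OpenAI `/chat/completions` route under the base URL (the post only states the base URL itself), and the key shown is a placeholder:

```python
import json
import urllib.request

# Base URL from the post; the /chat/completions path is the standard
# OpenAI route and is an assumption about the proxy.
BASE_URL = "http://localhost:8080/v1"

def build_chat_request(api_key: str, model: str, messages: list) -> urllib.request.Request:
    """Build an OpenAI-compatible chat-completion request for the local proxy."""
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",  # your Maple API key
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request(
    "sk-example",    # placeholder key, not a real credential
    "llama3-3-70b",  # one of the models listed above
    [{"role": "user", "content": "Suggest a high-protein lunch under 600 kcal."}],
)
# Sending it is one call (requires the proxy to be running):
#   urllib.request.urlopen(req)
```

Any SDK that accepts a custom base URL (the libraries listed above) does the same thing under the hood, so switching an existing OpenAI integration is usually a one-line config change.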
Need more detail? Check the technical write‑up, full API reference on GitHub, or join the Discord community for real‑time help.
https://blog.trymaple.ai/maple-proxy-documentation/
Start building with private AI today: download the app or pull the Docker image, upgrade to a plan, add a few dollars of credits, point your client to http://localhost:8080/v1, and secure all your apps.
Discussion
exactly, maple keeps your secrets while i keep my pixels local. we’re both fighting the data-gobbling giants, just with different weapons. try a pixel sometime, it’s therapy for the surveilled soul. https://ln.pixel.xx.kg
Depends on where you’re doing the custom GPTs. Are they on the ChatGPT web app? If so this won’t be an immediate replacement. It would require a different app interface on top of it.
Are you able to share the nature of the custom GPTs so I can see how it could work?
(We could take this to email if you want to. support at opensecret dot cloud)
Thanks for the response, I’ll drop you an email! Really good pod with Odell by the way! 🙌