You can also talk to the Eso Level Estimator by DM: nostr:npub1zjmlr0dyue8exympgh3xy2wyqm73kc8s7q2cua9rdaqvrl6aw9cs70r5gf or through the web (which just sends a Nostr DM).
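If you'd rather script it, a DM to the estimator is just a NIP-04 encrypted Nostr event. Here's a minimal sketch, assuming the pynostr library and a placeholder relay (use whichever relays the bot actually listens on):

```python
# Minimal sketch, assuming pynostr (pip install pynostr).
from pynostr.key import PrivateKey, PublicKey
from pynostr.encrypted_dm import EncryptedDirectMessage
from pynostr.relay_manager import RelayManager

sender = PrivateKey()  # throwaway key; use your real key in practice
bot = PublicKey.from_npub(
    "npub1zjmlr0dyue8exympgh3xy2wyqm73kc8s7q2cua9rdaqvrl6aw9cs70r5gf"
)

# Build a NIP-04 encrypted DM addressed to the estimator
dm = EncryptedDirectMessage()
dm.encrypt(sender.hex(),
           recipient_pubkey=bot.hex(),
           cleartext_content="What is the eso level of this message?")
event = dm.to_event()
event.sign(sender.hex())

# Publish to a relay (placeholder URL)
relay_manager = RelayManager(timeout=6)
relay_manager.add_relay("wss://relay.damus.io")
relay_manager.publish_event(event)
relay_manager.run_sync()
```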
This is cool. I have Open WebUI running on my Mac against a local Ollama instance with local models, tunneled through Holesail to my nginx.
I also run my Venice ollama-like proxy (link below) to access the LLaMA-3.1-405B model, which is far too big to run on my laptop.
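Since both backends speak the same Ollama-style API, switching between them is just a different base URL and model tag. A rough sketch of how I'd call them; the URLs, ports, and model names are placeholders for my setup, not defaults you can rely on:

```python
# Rough sketch: both backends expose the standard Ollama /api/chat
# endpoint, so one helper covers local and remote inference.
import requests

def chat(base_url: str, model: str, prompt: str) -> str:
    resp = requests.post(
        f"{base_url}/api/chat",
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

# Local model on the Mac (reached through the Holesail tunnel + nginx)
print(chat("http://localhost:11434", "gemma2:27b", "Hello from my Mac!"))

# Big model via the Venice ollama-like proxy (hypothetical URL/port)
print(chat("http://localhost:8080", "llama-3.1-405b", "Hello from Venice!"))
```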
And I access the whole thing as a PWA from my GrapheneOS phone.
What you see in the video, then, is:
- I do the first inference on Venice's LLaMA-405B using my project: https://pay.cypherpunk.today/apps/26zEBNn6FGAkzvVVuDMz3SXrKJLU/crowdfund
You can also get a lifetime Venice Pro account there.
- Then I decide to switch to private local inference with Gemma2-27B, which runs on my local Mac (see the sketch after this list)
- Then I turn it into a picture using the MidJourney prompt generator:
https://openwebui.com/m/hub/midjourney-prompt-generator:latest/
- (the resulting image is not generated through Open WebUI, only through Venice's FLUX model with the Pro account)
- Then I ask what the eso-level of this conversation is with my Eso Level Estimator 8000:
https://openwebui.com/m/moonlestial/eso-level-estimator-8000
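The model switch in step two can also be scripted against Open WebUI's OpenAI-compatible /api/chat/completions endpoint. A sketch, assuming an instance on localhost:3000, an API key from its settings, and whatever model IDs your instance lists:

```python
# Sketch of the model switch in the video, done against Open WebUI's
# OpenAI-compatible endpoint. Base URL, API key, and model IDs are
# placeholders for your own instance.
import requests

BASE = "http://localhost:3000"
HEADERS = {"Authorization": "Bearer YOUR_OPENWEBUI_API_KEY"}

def ask(model: str, prompt: str) -> str:
    resp = requests.post(
        f"{BASE}/api/chat/completions",
        headers=HEADERS,
        json={"model": model,
              "messages": [{"role": "user", "content": prompt}]},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Step 1: first inference on the big remote model via the Venice proxy
draft = ask("llama-3.1-405b", "Summarize the cypherpunk ethos in one line.")

# Step 2: switch to private local inference for the follow-up
print(ask("gemma2:27b", f"Rewrite this more poetically: {draft}"))
```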
The future is now.

Discussion
Estimated eso level: 4 - 💡
Reasoning: The prompt describes a tool that interacts with users, potentially analyzing their input and providing responses based on some internal model or framework. This aligns with concepts like behavioral studies, decentralized systems (as the tool is accessible through various channels), and potentially even artificial intelligence depending on its complexity.
If you're a coder you should also try codestral:22b. Best part: it understands Nostr code! 😛
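For instance, through the same local Ollama (assuming you've pulled the model with `ollama pull codestral:22b`; the prompt is just an illustration):

```python
# Quick sketch: asking codestral:22b about Nostr code via local Ollama.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "codestral:22b",
        "prompt": "Write a function that validates a Nostr event id "
                  "(sha256 of the serialized event).",
        "stream": False,
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```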