This is cool. I have Open WebUI running with a local Ollama and local models on my Mac, tunneled through Holesail to my nginx.

I also run my Venice Ollama-like proxy (link below) to access the LLaMA-3.1-405B model, which does not run on my laptop.

And I access it as a PWA from my GrapheneOS phone.
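For context on how the pieces fit together: both the local Ollama and the Venice proxy speak an Ollama-style chat API, so a client only needs a different base URL and model name per backend. Here is a minimal Python sketch of that idea; the hostnames, the 405B model tag, and the exact proxy path are assumptions, not my actual config:

```python
# Minimal sketch, not the exact setup from the video: hostnames and the 405B
# model tag are made up. Both backends expose an Ollama-style /api/chat
# endpoint, so switching is just a different base URL and model name.
import requests

def chat(base_url: str, model: str, prompt: str) -> str:
    """One non-streaming request against an Ollama-style /api/chat endpoint."""
    resp = requests.post(
        f"{base_url}/api/chat",
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,
        },
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

# Local Ollama on the Mac, reached through the Holesail tunnel and nginx (hypothetical host).
print(chat("https://ollama.example.org", "gemma2:27b", "Hello from my phone"))

# Venice Ollama-like proxy for the model that does not fit on the laptop (hypothetical host and tag).
print(chat("https://venice.example.org", "llama-3.1-405b", "Hello from my phone"))
```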

What you see in the video, then, is:

- I do the first inference on Venice's LLaMA-405B using my project: https://pay.cypherpunk.today/apps/26zEBNn6FGAkzvVVuDMz3SXrKJLU/crowdfund

You can also get a lifetime Venice Pro account there.

- Then I decide to switch to private local inference with Gemma2-27B, which runs on my Mac (see the sketch after this list)

- Then I turn it into a picture using the MidJourney prompt generator:

https://openwebui.com/m/hub/midjourney-prompt-generator:latest/

- (the resulting image is not generated through Open WebUI, but through Venice's FLUX model with the Pro account)

- Then I ask what the eso-level of this conversation is with my Eso Level Estimator 8000:

https://openwebui.com/m/moonlestial/eso-level-estimator-8000
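The Gemma2-27B step above is, under the hood, just a local chat call. A rough sketch with the official ollama Python client; the prompt is invented, and streaming is used only to mimic how a chat UI renders tokens as they arrive:

```python
# Rough sketch of the local Gemma2-27B step with the official `ollama` Python
# client (pip install ollama), assuming the model was already pulled locally.
import ollama

stream = ollama.chat(
    model="gemma2:27b",
    messages=[{"role": "user", "content": "Summarize our conversation so far."}],
    stream=True,  # yields chunks as they are generated, like a chat UI
)
for chunk in stream:
    print(chunk["message"]["content"], end="", flush=True)
print()
```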

The future is now.

https://m.primal.net/KGXS.mp4

Discussion

You can also talk to the Eso Level Estimator by DM: nostr:npub1zjmlr0dyue8exympgh3xy2wyqm73kc8s7q2cua9rdaqvrl6aw9cs70r5gf or via the web (which just sends a Nostr DM):

https://juraj.bednar.io/assets/esolevel/index.html

If you're a coder you should also try codestral:22b. Best part: it understands Nostr code! 😛

Wow, I'm amazed. I'm at the local Ollama stage; you're warp lightyears ahead.

That's quite something! 😅

Very nice.

Which keyboard do you use on Graphene for gesture typing?

Gboard by Google, which does not have the Network permission.

I would like to ask the same question.

I'm trying to find a better solution than a Google product.

This is the way!

Siiick