Running locally? Are you embedding lightweight models directly into Nostria? So if I use text to speech on my machine.. it will use my hardware resources?
Just added a bunch of local AI features to Nostria. Right now I'm testing out the speech to text, which is working pretty well. All of it runs locally on device for full privacy, and I just love it. It's very experimental right now, but it's already deployed, so you can try it out. Just remember it's highly experimental and will improve a lot in the coming days.
https://mibo.eu.nostria.app/1f264cbafb57f0bb8df753e1b59f1df2998f568c25976c55cbed3d1b01ccee2d.webp
Discussion
The model is downloaded and runs locally. It uses WebGPU if available (otherwise WASM) and runs inside Web Workers, so inference doesn't block the UI.
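For anyone curious how a setup like this can work in the browser, here's a minimal sketch using transformers.js (`@huggingface/transformers`) with a small Whisper model. This is not Nostria's actual code; the worker file name, model id, and message protocol are purely illustrative assumptions.

```ts
// stt.worker.ts -- hypothetical worker module, names are illustrative.
import { pipeline } from '@huggingface/transformers';

let transcriber: any = null;

self.onmessage = async (event: MessageEvent<Float32Array>) => {
  if (!transcriber) {
    // First call: download the model (cached by the browser afterwards) and
    // pick a backend -- prefer WebGPU when exposed, otherwise fall back to WASM.
    transcriber = await pipeline(
      'automatic-speech-recognition',
      'onnx-community/whisper-tiny.en',
      { device: 'gpu' in navigator ? 'webgpu' : 'wasm' },
    );
  }
  // Inference happens inside the worker, so the main thread stays responsive.
  const { text } = await transcriber(event.data); // 16 kHz mono Float32Array
  self.postMessage(text);
};
```

```ts
// main thread -- hand audio samples to the worker and receive the transcript.
const worker = new Worker(new URL('./stt.worker.ts', import.meta.url), { type: 'module' });
worker.onmessage = (e: MessageEvent<string>) => console.log('Transcript:', e.data);

const samples = new Float32Array(16000); // placeholder: 1 s of 16 kHz audio from the mic or a file
worker.postMessage(samples);
```

Because the weights sit in the browser cache and inference never leaves the worker, nothing is sent to a server, which is where the privacy claim comes from.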