I see. Is it that expensive to do inference on the GPU?

It's an API wrapped as a DVM, and that's the service cost of the API

I'm not running my own model or hardware

I could make a much cheaper DVM, but the largest part of the expense is the voice cloning in the service I'm using
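
For anyone curious what "an API wrapped as a DVM" looks like in practice: under NIP-90, the DVM just watches for job request events (kind 5250 for text-to-speech), forwards the input to the upstream service, and publishes a result event (kind 6250). Here's a minimal Python sketch of that handler. The API URL, key, and response shape are hypothetical since the thread doesn't name the service, and signing/relay publishing are omitted:

```python
import json
import requests  # assumes the `requests` package is installed

# Hypothetical upstream voice-cloning endpoint and key; the actual
# service discussed in the thread is not named.
TTS_API_URL = "https://api.example-voice.com/v1/speak"
TTS_API_KEY = "YOUR_API_KEY"

def handle_job_request(event: dict) -> dict:
    """Turn a NIP-90 text-to-speech job request (kind 5250) into an
    unsigned result event (kind 6250) by forwarding to the upstream API."""
    # NIP-90 carries job input in an "i" tag: ["i", <data>, <type>, ...]
    text = next(tag[1] for tag in event["tags"] if tag[0] == "i")

    # The DVM does no inference itself; it just pays the API's per-call price.
    resp = requests.post(
        TTS_API_URL,
        headers={"Authorization": f"Bearer {TTS_API_KEY}"},
        json={"text": text},
        timeout=60,
    )
    resp.raise_for_status()
    audio_url = resp.json()["url"]  # assumed response shape

    # Result event per NIP-90; signing and relay I/O left out of the sketch.
    return {
        "kind": 6250,
        "tags": [
            ["request", json.dumps(event)],
            ["e", event["id"]],
            ["p", event["pubkey"]],
        ],
        "content": audio_url,
    }
```

So the DVM's quoted price is basically the upstream API's per-call cost plus whatever margin the operator adds, which is why a cheaper voice backend means a cheaper DVM.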