Interesting! I did some vibe coding and got kind of good results, but I was mostly impressed by the speed. Quality-wise, it had the crazy idea to crawl whole Blossom servers to find blobs it already knew were indexed on nostr. That was weird. When I told it that Blossom servers might have millions of files, it suggested using pagination to download the files over multiple requests.
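What I had in mind instead was much simpler: Blossom blobs are content-addressed by their sha256, so if the hash is already indexed on nostr you can just fetch it directly instead of crawling anything. A rough sketch of that idea (the server URLs are made-up placeholders, and the hash would come from whatever nostr event references the blob):

```typescript
// Hypothetical list of Blossom servers to try; swap in real ones.
const servers = [
  "https://blossom.example.com",
  "https://cdn.example.org",
];

// Fetch a blob by its sha256 hash (taken from a nostr event), trying each
// known server until one of them has it. No crawling, no pagination.
async function fetchBlob(sha256: string): Promise<Uint8Array | null> {
  for (const server of servers) {
    const res = await fetch(`${server}/${sha256}`);
    if (res.ok) return new Uint8Array(await res.arrayBuffer());
  }
  return null; // none of the known servers had the blob
}
```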
Discussion
Yeah, the speed is nuts, but I have a feeling that's because it isn't seeing much demand yet. It's some monster provider with huge infra (OpenAI perhaps) testing out their model before release.
I see "openAI perhaps" but on your website you attribute it clearly to openAI at https://api.ppq.ai/models

How do you not know? What would be the API endpoint to try it out without going through ppq? I'm having size issues: some 403819-byte queries run into problems with this and other models that should support a million tokens.
I think we get the "owned_by" field from OpenRouter in this case, as we are pulling the model from there.
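So if you want to try it without ppq, you could hit OpenRouter's chat completions endpoint directly with your own key. Something along these lines; the model slug is a placeholder, check what OpenRouter actually lists it as:

```typescript
// Sketch of a direct OpenRouter call, bypassing ppq. Requires your own
// OpenRouter API key in the OPENROUTER_API_KEY environment variable.
async function tryModel(prompt: string): Promise<string | undefined> {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "<model-slug>", // placeholder: the slug OpenRouter lists for this model
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices?.[0]?.message?.content;
}
```

That would at least tell you whether the size limit is on ppq's side or upstream.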