I think those are all good questions. Function calls over relays could put a lot of pressure on relays as data stores, but I think it'll be ok if we combine hash-based syncing with caching.
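The hash-based syncing idea can be sketched roughly like this: each side computes an order-independent digest of the event ids it holds, and a transfer only happens when the digests disagree. This is a minimal illustration, not any particular protocol; the function names are hypothetical.

```python
import hashlib

def dataset_digest(event_ids):
    # Order-independent digest of a collection of event ids (hex strings):
    # sort first so the same set always hashes the same way.
    h = hashlib.sha256()
    for eid in sorted(event_ids):
        h.update(bytes.fromhex(eid))
    return h.hexdigest()

def needs_sync(local_ids, remote_digest):
    # Skip the transfer entirely when the two sides already agree.
    return dataset_digest(local_ids) != remote_digest
```

A real implementation would reconcile subsets rather than all-or-nothing, but even this coarse check lets a client avoid re-downloading a dataset it already has.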

Using relays as brokers is also ok in many situations, because DVMs effectively compress the results relays need to serve by de-normalizing them. In other words, the relay only has to serve the full dataset once, and the DVM reduces it to a single number (as in the case of count) or a small result set (as in the case of search/recommend). Caching results for a short time could likewise avoid recalculating the same result multiple times. You're right that for higher-volume/frequency/cardinality requests, having DVMs also serve an HTTP endpoint would be an improvement. @uncleJim21 wants this. I think we should exhaust the possibilities of DVMs before adding yet another interface, but I think we will eventually need it.
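To make the caching point concrete, here's a minimal sketch of a short-TTL result cache a DVM might keep in front of an expensive job like count. The class and key names are hypothetical, not from any DVM implementation.

```python
import time

class ResultCache:
    # Short-lived cache so a DVM computes each result once per TTL window.
    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._store = {}  # request key -> (expiry timestamp, result)

    def get_or_compute(self, key, compute):
        now = time.monotonic()
        hit = self._store.get(key)
        if hit and hit[0] > now:
            return hit[1]  # fresh cached result: skip recomputation
        result = compute()
        self._store[key] = (now + self.ttl, result)
        return result

# A hypothetical "count" job: the relay serves the full dataset once,
# and the DVM de-normalizes it into a single number.
events = ["e1", "e2", "e3"]
cache = ResultCache(ttl_seconds=60)
count = cache.get_or_compute("count:kind=1", lambda: len(events))
```

Repeated identical requests inside the TTL window then cost a dictionary lookup instead of another pass over the dataset.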
