Thank you for working on this, specifically making sure it's optimized. I'm using nostr-social-graph in noStrudel to build and cache the user's social graph.

I was just looking into the code this morning to figure out if it's possible to make the deserialize / recalculate methods faster so they don't freeze the app (right now it freezes for ~4s when loading a graph of 160k users).

Discussion

Happy to hear you're using it! I deployed the binary serialization on iris.to today but rolled back because of deserialize / recalc slowness and possibly other issues. I'll keep working on it 🫡

One option I'm interested in is binary-streaming a graph and merging it into an existing graph instead of recreating / replacing, maybe with queueMicrotask steps to make it block less.
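
A rough sketch of what that could look like, assuming a hypothetical `addEdge` method and a decoded edge iterator (neither is the library's actual API):

```typescript
// Hypothetical sketch: merge edges decoded from a binary stream into an
// existing graph in batches, pausing between batches as proposed above.
// `addEdge` and the Edge shape are placeholders, not the real API.
type Edge = { follower: string; followed: string };

async function mergeIntoGraph(
  graph: { addEdge(follower: string, followed: string): void },
  edges: Iterable<Edge>,
  batchSize = 5000
): Promise<void> {
  let count = 0;
  for (const { follower, followed } of edges) {
    graph.addEdge(follower, followed);
    if (++count % batchSize === 0) {
      // Pause between batches; queueMicrotask lets other queued microtasks run,
      // though it does not yield to rendering the way a macrotask would.
      await new Promise<void>((resolve) => queueMicrotask(resolve));
    }
  }
}
```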

Anything you can do to improve the performance will be awesome, but if it's not possible to make it faster, then I'll look into wrapping it up in a worker so it at least doesn't block the main thread.
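
For reference, a minimal sketch of the worker idea, with a hypothetical `deserializeGraph` parser and message shape (not the library's actual API):

```typescript
// graph.worker.ts — heavy parsing happens here instead of on the UI thread.
// `deserializeGraph` stands in for whatever binary parser is used.
declare function deserializeGraph(buffer: ArrayBuffer): unknown;

self.onmessage = (event: MessageEvent<ArrayBuffer>) => {
  postMessage(deserializeGraph(event.data));
};

// main.ts — the UI thread just hands the bytes over and awaits the result.
const worker = new Worker(new URL("./graph.worker.ts", import.meta.url), {
  type: "module",
});

function loadGraphOffThread(buffer: ArrayBuffer): Promise<unknown> {
  return new Promise((resolve, reject) => {
    worker.onmessage = (event) => resolve(event.data);
    worker.onerror = reject;
    // Transfer the ArrayBuffer instead of copying it.
    worker.postMessage(buffer, [buffer]);
  });
}
```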

I made recalculateFollowDistances async; it now processes in batches of 1000 and uses queueMicrotask.
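
Not the library's actual implementation, but a rough sketch of that shape of change, with a plain `Map` standing in for the internal data structures:

```typescript
// Illustrative only: breadth-first follow-distance recalculation that pauses
// after every batch of 1000 processed users, roughly mirroring the change
// described above. `follows` maps each user to the set of users they follow.
async function recalculateFollowDistances(
  root: string,
  follows: Map<string, Set<string>>,
  batchSize = 1000
): Promise<Map<string, number>> {
  const distances = new Map<string, number>([[root, 0]]);
  let queue: string[] = [root];
  let processed = 0;

  while (queue.length > 0) {
    const next: string[] = [];
    for (const user of queue) {
      const distance = distances.get(user)!;
      for (const followed of follows.get(user) ?? []) {
        if (!distances.has(followed)) {
          distances.set(followed, distance + 1);
          next.push(followed);
        }
      }
      if (++processed % batchSize === 0) {
        // Pause between batches; awaiting a microtask breaks the work up,
        // though it still runs ahead of rendering (see the setTimeout note below).
        await new Promise<void>((resolve) => queueMicrotask(resolve));
      }
    }
    queue = next;
  }
  return distances;
}
```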

Looks like that helped a lot. Also, I was calling the recalculate method every few seconds and it looks like that's not necessary... so that's a 2x improvement right there :)

The only thing that's still causing the page to freeze is deserializing the graph. It takes about 3s on my machine.

I haven't looked into it too much, but it looks like it's the follow distance calculation that is taking the longest. Do you think it would make sense to store the distance in the serialized graph in order to speed up load times?
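
For illustration, something along these lines, with hypothetical field names (not the library's actual serialization format):

```typescript
// Hypothetical shape: persist the precomputed follow distance alongside each
// user so it can be restored on load instead of recalculated from scratch.
interface SerializedUser {
  id: string;
  followedIds: string[];
  followDistance: number; // computed once at serialization time
}

// Restoring becomes a straight copy rather than a graph traversal.
function restoreDistances(users: SerializedUser[]): Map<string, number> {
  const distances = new Map<string, number>();
  for (const user of users) {
    distances.set(user.id, user.followDistance);
  }
  return distances;
}
```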

This gives me an idea.

OpenAI o3 optimized the recalc method; it should be faster now. I also changed queueMicrotask to setTimeout, which might block less.
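
The reasoning behind the switch: microtasks run before the browser gets a chance to render, so a long chain of queueMicrotask steps can still freeze the page, while setTimeout(..., 0) queues a macrotask and lets rendering and input handling run between batches. Something like:

```typescript
// Yield control to the event loop so the browser can paint and handle input
// between batches of work. setTimeout(0) queues a macrotask, unlike
// queueMicrotask, which runs before rendering.
function yieldToEventLoop(): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Used between batches:
// await yieldToEventLoop();
```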

Consecutive calls to recalc while it's still in progress now return the same promise instead of starting a new operation.
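
A minimal sketch of that pattern (the actual method lives inside the library):

```typescript
// Coalesce concurrent callers onto one in-flight recalculation: if a recalc is
// already running, hand back its promise; otherwise start a new one and clear
// the reference once it settles.
let pendingRecalc: Promise<void> | null = null;

function recalcOnce(doRecalc: () => Promise<void>): Promise<void> {
  if (!pendingRecalc) {
    pendingRecalc = doRecalc().finally(() => {
      pendingRecalc = null;
    });
  }
  return pendingRecalc;
}
```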