I am not familiar with Apple's ML workflow, but the idea of FL (federated learning) is that you can train models in a distributed way without having to pass data between devices, only device-specific estimates of the model parameters. This improves data privacy, since no remote access to the data is needed, and it reduces the bandwidth required for distributed computing. It also has good theoretical convergence guarantees, in the sense that the final solution shared between devices can be as good as if you had optimized the model on a single device with access to all the data. It is a rather new paradigm, so there is a lot of room for improvement.
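To make the idea concrete, here is a minimal federated-averaging (FedAvg-style) sketch in Python/NumPy. It is illustrative only: the data, the linear model, and the helper `local_update` are all made up for the example; the point is just that each device shares its parameter vector, never its raw data, and the server only averages those parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # ground-truth parameters for the toy problem

def local_update(w, X, y, lr=0.1, epochs=20):
    """One device: a few gradient steps on its private data for a linear model."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Simulate three devices, each holding private data the server never sees.
devices = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    devices.append((X, y))

w_global = np.zeros(2)
for round_ in range(10):  # communication rounds
    # Each device trains locally and sends back only its parameter estimate.
    local_ws = [local_update(w_global, X, y) for X, y in devices]
    # Server aggregates: a simple average of parameters (equal data sizes here).
    w_global = np.mean(local_ws, axis=0)

print("recovered parameters:", w_global)  # should be close to true_w
```

Real systems add weighting by local dataset size, secure aggregation, and handling of non-IID data, but the communication pattern is the same: parameters go up, averaged parameters come back down.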