Elon is all-in on server-side inference, I guess, but Apple’s product lineup looks set to be a hero for local inference. MLX, unified memory, and Apple Silicon make reasonably large models accessible to prosumers and hobbyists, and a cluster of Mac Studios running Exo Labs’ software can already get pretty sophisticated jobs done.

In an ideal near future, households would run a local, always-on model with full access to the owner’s personal data. It would act as a mediating layer, calling out to frontier models to carry out long-horizon tasks for the household without leaking sensitive data to third parties.
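To make the mediating-layer idea concrete, here is a minimal sketch. Everything in it is hypothetical: `redact` stands in for the local model’s judgment about what is sensitive, and `mediate` plays the role of the always-on household layer that sits between personal data and a frontier API.

```python
import re

# Illustrative patterns a local model might treat as sensitive.
# A real local model would do this with learned judgment, not regexes.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Stand-in for the local model: replace sensitive spans with placeholders."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def mediate(task: str, send_to_frontier) -> str:
    """Forward a long-horizon task to a frontier model without raw personal data.

    `send_to_frontier` is a hypothetical callable wrapping a remote model API;
    only the redacted prompt ever crosses the network boundary.
    """
    return send_to_frontier(redact(task))

if __name__ == "__main__":
    # The "frontier model" here is a stub that echoes what it received,
    # so we can see exactly what would leave the house.
    seen_by_frontier = mediate(
        "Book a table and confirm to jane@example.com or 555-123-4567",
        send_to_frontier=lambda prompt: prompt,
    )
    print(seen_by_frontier)
```

The key design point is that the frontier model only ever sees the redacted task; the local layer keeps the mapping from placeholders back to real identities, so it can substitute them into any replies on the way back in.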
