So you get a world in which all the problems of selective censorship are "solved" not by the app (i.e. the centralized server) censoring things, but by giving the user the tools to filter out whatever he doesn't want to see. This will obviously break down, because censorship is never about what you want to see, it is about what you want to prevent others from seeing, so there will be pressure for the app to apply absolute, server-side censorship. And the app owners themselves will want to censor some things, as they have done with CSAM from day one. No one disagrees with that at first, but as time passes their opinions will change too, just as happens on every platform.
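To make the distinction concrete, here is a minimal sketch of the two layers involved. All the names (`Post`, `userMuteList`, `appBlockList`) are hypothetical, just to illustrate the difference between a filter the user controls and one the app enforces on everyone:

```typescript
// Hypothetical types and names, for illustration only.
interface Post {
  id: string;
  author: string;
  content: string;
}

// Layer 1: the user's own filter, applied client-side.
// The user can edit this freely; it only affects their own feed.
const userMuteList = new Set<string>(["spammer123"]);

// Layer 2: the app's filter, applied server-side before anything
// reaches any client. No user setting can override it, which is
// what makes it absolute censorship rather than personal filtering.
const appBlockList = new Set<string>(["bannedAuthor"]);

function serverFeed(posts: Post[]): Post[] {
  // Enforced for everyone, regardless of user preferences.
  return posts.filter((p) => !appBlockList.has(p.author));
}

function clientFeed(posts: Post[]): Post[] {
  // Only affects what this particular user sees.
  return serverFeed(posts).filter((p) => !userMuteList.has(p.author));
}
```

The pressure described above is precisely the pressure to move entries from the first set into the second.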
The same applies to the choice of algorithms: users are free to choose, but only from the options the app offers them, and only for as long as the app keeps offering them.
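A sketch of what that "choice" typically looks like in practice, again with hypothetical names: the user picks from a list, but the list itself, and each entry's continued existence, lives entirely in the app's code:

```typescript
// Hypothetical types and ranking functions, for illustration only.
interface Post {
  id: string;
  timestamp: number;
  likes: number;
}

type Ranker = (posts: Post[]) => Post[];

// The app alone decides which algorithms exist; deleting an entry
// here silently removes that "choice" from every user.
const availableAlgorithms: Record<string, Ranker> = {
  chronological: (posts) =>
    [...posts].sort((a, b) => b.timestamp - a.timestamp),
  engagement: (posts) => [...posts].sort((a, b) => b.likes - a.likes),
};

function rankFeed(posts: Post[], userChoice: string): Post[] {
  // The user's preference is honored only while the app still ships
  // it; anything outside the list falls back to the app's default.
  const ranker =
    availableAlgorithms[userChoice] ?? availableAlgorithms["chronological"];
  return ranker(posts);
}
```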