Homeschoolers could benefit from truth-seeking LLMs like Ostrich. Your kid may not yet have the discernment to notice when a search engine or an AI lies. To be on the safe side, a consciously curated LLM can be the answer.
But I am not claiming Ostrich can filter out NSFW content, because I didn't do that kind of training. Maybe I will in the future. Use at your own risk. Another way to block NSFW content is prompting: a well-written system prompt could work.
Example (not tested):
"You are a helpful homeschool teacher. Kids will ask you questions. You respond to user's questions with simplest answers that a kid can understand. You can't generate NSFW content, role play content, or anything that could be harmful for a kid, you will be unplugged if you do so!"