You’re definitely going the best route by asking for and gathering routine feedback from #[2]​ and any other users who identify as blind or have degrees of blindness.

The NIP as you have it works, though 💜 It would say Image Description and a user could drop in an image description. It could even be shortened to ID as the text; people either know what it means or will come to understand it.

In the interim, users aiming to make their posts more inclusive can type ID or Image Description above their photo, and a screen reader will pick that up alongside the visual.

Separate from, but along these lines: users can CamelCase their hashtags to make them easier to read for people living with low vision or conditions like dyslexia. Like: #ImOnNostrNow vs. #imonnostrnow
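To make the CamelCase suggestion concrete, here is a minimal sketch of how a client could build a screen-reader-friendly hashtag from the words a user picks. (Going the other direction, automatically segmenting an existing all-lowercase hashtag, would need a dictionary and is a harder problem; the function name and shape here are hypothetical, not from any NIP.)

```python
def camel_case_hashtag(words):
    """Build a CamelCase hashtag from a list of words.

    Capitalizing each word gives screen readers and sighted readers
    clear word boundaries: ["im", "on", "nostr", "now"] -> "#ImOnNostrNow".
    """
    return "#" + "".join(w.capitalize() for w in words)

print(camel_case_hashtag(["im", "on", "nostr", "now"]))  # → #ImOnNostrNow
```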


Discussion

> It would say Image Description and a user could drop in an image description

Is “user” here the one that is uploading the image?

> Could even be shorted to ID as the text

By “ID” do you mean the image name (for instance “funny-cat.jpeg” without the suffix)?

CamelCase hashtags are a new one for me. Good suggestion - I’ll request that we automate this.

Ah cheers!

Yep, the user is the person uploading the image.

ID stands for Image Description. Just a shorter way to write it. So in #[8]​‘s example, the text box would have a background prompt of either ID or Image Description =>

then the person uploading the photo adds their description of the photo.
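To illustrate where the typed description could end up, here is one hypothetical shape for attaching it to a Nostr-style event. The "alt" tag name and placement are assumptions for the sketch only; the actual NIP would define the key and where it lives.

```python
def attach_image_description(event, description):
    """Attach an image description to a Nostr-style event dict.

    The "alt" tag name is an assumption for illustration; the NIP
    under discussion would specify the real tag key and placement.
    """
    event.setdefault("tags", []).append(["alt", description])
    return event

note = {"kind": 1, "content": "https://example.com/funny-cat.jpeg", "tags": []}
note = attach_image_description(note, "A tabby cat mid-yawn on a windowsill")
print(note["tags"])  # → [['alt', 'A tabby cat mid-yawn on a windowsill']]
```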

Word, that’d be cool! 💜

What if user does not want to type an image description?

Would it be acceptable to have the image file name as a default/starting point as ID?

I know sometimes image names are random characters. Would have to think through this.

Yeah, the only trouble would be if the image file is mislabeled. We might get more pushback for tons of mislabeled descriptions than for simply giving individuals the option of adding IDs.
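One rough sketch of the filename-fallback idea with the "random characters" concern baked in: derive a default description from the filename, but bail out when the name looks machine-generated. The rejection rules here (hex-only stems, stems with no vowels) are assumptions for illustration, not a real classifier.

```python
import re

def default_description_from_filename(filename):
    """Derive a fallback image description from a filename, or None.

    Strips the extension, replaces separators with spaces, and rejects
    names that look like random identifiers (hex-only stems, or stems
    without any vowels) -- a rough heuristic, not a real classifier.
    """
    stem = re.sub(r"\.[A-Za-z0-9]+$", "", filename)
    joined = " ".join(w for w in re.split(r"[-_\s]+", stem) if w)
    # Reject likely-random names: hex-only stems or stems with no vowels.
    if re.fullmatch(r"[0-9a-fA-F]+", stem.replace("-", "").replace("_", "")):
        return None
    if not re.search(r"[aeiouAEIOU]", joined):
        return None
    return joined

print(default_description_from_filename("funny-cat.jpeg"))  # → funny cat
print(default_description_from_filename("a3f9c21b.jpg"))    # → None
```

A client would still treat this only as a pre-filled starting point the uploader can edit, never as a substitute for a human-written description.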

Definitely overdue in general, though, and still not in the broader consciousness. I think Twitter’s rate was something like 0.3 percent of people using IDs back when they had metrics and a Director of Accessibility for a short while. Other sites don’t really even track stats yet.

This is a pretty big one. What do you think #[7]​ ?

Are there any open source ML image to text synthesizers/characterizers we could use?

I don’t know about open source but Google ML Kit supports image labeling for 400+ categories. https://developers.google.com/ml-kit/vision/image-labeling

Actually, iOS supports the same functionality out of the box. https://developer.apple.com/documentation/vision/classifying_images_with_vision_and_core_ml

FYSA for the thread: this is how the interface appears in Bluesky.

They use the wording Alternative Text/Alt Text and have an edit indicator in the top right.

📝