David Young's "Learning Nature" series -- AI images generated from pictures of flowers Young took -- is quite lovely: https://davidyoung.art/project/learningnature.html
nostr:npub1gpq8xa3zq5lxezwkgscjfqekjz4ktamce4u0gs6g7frdje0asyksuhlvnj
100%
And as ever, universal design principles wind up being good for everyone
I don't have dyscalculia or any perceptual issues with numbers, but damn those legibility techniques make it *way* easier for me to parse big numbers
nostr:npub1kv2te35v3f5q03uksdj7r84dfdtdtxnhsfxgkdq7ztlnztlw96xsknjw84
Yes!
When I researched the history of personal photography for Smithsonian a few years back, I remember one historian remarking that the problem of color cascaded on through tech:
- early color film renders dark skin poorly --> disproportionately few photos of Black Americans get taken --> perception that the market is mostly white --> digital cameras are just as bad --> on the early internet, digital pix are disproportionately of white folks --> data used to train visual AI is disproportionately white --> etc
nostr:npub19l8dlqme2csmqxu0muyg2nwlj8239gkwgwalvfgwd8hl0cvgzdxqvczuv9
Those are adorable little cars!
nostr:npub1mfa7rpjwyup5yeg58cs3e5e576jxt5u75mvq6mqs9ed0k2yucsfs8t09w4
Agreed — without smart policy to smooth these transitions (and frankly, even with it), you get serious culture wars
This is something I wrote about a few months back, when that Google engineer was claiming that LaMDA, Google’s large language model, was sentient
When you looked at some of the records of his chats with the bot, what leapt out was that the chatbot’s replies evoked *vulnerability*
Sherry Turkle has written a lot about how this is a classic trick of bot makers and robot creators — a bot that seems needy or fallible seems more human: https://clivethompson.medium.com/one-weird-trick-to-make-humans-think-an-ai-is-sentient-f77fb661e127
4/4
A superb essay by Karawynn Long pointing out a central challenge in today’s language-focused AI — which is the assumption that fluency with language *is* intelligence: https://karawynn.substack.com/p/language-is-a-poor-heuristic-for
As Long notes, many folks with autism are exceptionally intelligent but don’t possess high fluency with spoken language — and they get regarded as unintelligent: https://karawynn.substack.com/p/language-is-a-poor-heuristic-for
(via nostr:npub1e0a6n84krygh2qc3ajptu4fa4ma0qy53acz8smsyhztgxljxkghsureww0)
1/x
The whole essay is crackling and worth reading in full; Long predicts that the penchant of large language model chatbots for screwing up basic facts will nudge users to stop associating language with intelligence …
I hope she’s right, though the counterargument would be that humans have for aeons been perfectly happy to ascribe intelligence — and, indeed, absolute genius — to glad-handing bullshitters
2/x

I really enjoyed this point Long made — which is that ChatGPT users seem particularly affected by the bot’s *apologies*, when it gets something wrong and is told “yeah no that’s wrong”
They think the bot is learning
But it’s not. The apology is just a canned reply
Still, it has a powerful psychological effect on human users
3/x


nostr:npub1tesapyjma0h2zelnlts2yahhenfflys2y30dna907452qc8qvnmsfc5hzz
In the US, the argument is mostly "people *want* huge-ass cars and nobody likes paying taxes, so politicians would never have the guts to support this"
An argument for taxing vehicles by weight, including EVs: https://slate.com/business/2023/01/electric-cars-hummer-ev-tax-fees-weight-joe-biden.html