## Why I am sidelining DVMs for now
When I started to grow tired of random posts on nostr, I had a plan to create a DVM to tackle the problem of noise. I wanted a feed that caters to people like me, who are in it for the professional discussions and meaningful FOSS work.
I went with an LLM approach because simple hashtag parsing can only do so much. At first I thought I would fine-tune a BERT-like model, which should be easier to self-host. That turned out to be a harder goal to achieve than I expected.
I wanted to use GPT-4o to help me generate data for the fine-tuning. Sadly, that was a bust: it screwed me over with garbage so many times that I got exhausted.
Then I downloaded a bunch of high-rated Stack Exchange posts to be clustered and used for the fine-tuning.
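For illustration, the clustering step could look something like this minimal sketch. It assumes plain-text posts and uses only stdlib TF-IDF plus a small k-means with deterministic farthest-point initialization; the real pipeline presumably used a proper library or embeddings, so treat every name here as hypothetical.

```python
import math
from collections import Counter

def tfidf(docs):
    """One sparse {term: weight} dict per document."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))  # document frequency per term
    n = len(docs)
    return [
        {t: (c / len(toks)) * math.log(n / df[t]) for t, c in Counter(toks).items()}
        for toks in tokenized
    ]

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def kmeans(vecs, k, iters=10):
    # Deterministic farthest-point init instead of random seeding,
    # so runs are reproducible.
    centers = [vecs[0]]
    while len(centers) < k:
        centers.append(min(vecs, key=lambda v: max(cosine(v, c) for c in centers)))
    assign = [0] * len(vecs)
    for _ in range(iters):
        assign = [max(range(k), key=lambda c: cosine(v, centers[c])) for v in vecs]
        for c in range(k):
            members = [v for v, a in zip(vecs, assign) if a == c]
            if members:
                merged = Counter()
                for m in members:
                    merged.update(m)  # Counter.update sums float weights
                centers[c] = {t: w / len(members) for t, w in merged.items()}
    return assign
```

Each cluster can then be skimmed and labeled by hand to produce fine-tuning examples, which keeps a human in the loop where the LLM-generated data fell short.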
This had already taken more time than I wanted, and I realized that for a PoC I might as well use the OpenAI API. So I did, and started experimenting with different prompts for GPT-4o mini.
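A PoC classifier along these lines can be sketched as follows, calling the OpenAI chat completions endpoint over stdlib HTTP. The label set, prompt wording, and off-label fallback are my own illustrative assumptions, not the actual prompts from the experiment.

```python
import json
import os
import urllib.request

LABELS = ("professional", "noise")  # assumed label set for illustration

def build_prompt(note_text):
    # Pin the model to a closed label set; free-form answers invite hallucination.
    return (
        "Classify the following nostr note for a technical feed. "
        f"Answer with exactly one word: {LABELS[0]} or {LABELS[1]}.\n\n"
        f"Note:\n{note_text}"
    )

def classify(note_text, api_key=None):
    """One gpt-4o-mini call; anything off-label is treated as noise."""
    body = json.dumps({
        "model": "gpt-4o-mini",
        "temperature": 0,  # reduce variance between runs
        "messages": [{"role": "user", "content": build_prompt(note_text)}],
    }).encode()
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + (api_key or os.environ["OPENAI_API_KEY"]),
        },
    )
    with urllib.request.urlopen(req) as resp:
        answer = json.load(resp)["choices"][0]["message"]["content"].strip().lower()
    return answer if answer in LABELS else "noise"
```

Even with a constrained prompt like this, the model can drift off-label, which is exactly the failure mode described below.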
In the meantime I got acquainted with the Python-based nostr-dvm framework and set up the basics I needed for a DVM service.
After some grind I got everything working, but I still was not very pleased with the classification results.
Now, I know I could have put in even more time to really nail that prompt, but I kind of lost my faith and appetite. I successfully use AI to learn, to generate rudimentary code, and to chat about ideas. But what I needed here was to generate a feed without manual intervention. AI people tend to recommend techniques for improving results that are borderline witchcraft, and every single time GPT finds a way to hallucinate plenty of things I did not expect, no matter how hard I try. That matches my experience from months of daily interaction with GPT-4o too, which is much better than the mini version.
So no, I won't get trapped in the AI hype again. It is what it is: without human oversight these things are still worthless. I'm not aiming for a 90% usable thing, and I don't have a straightforward way to get to 100%. No one does, because these stochastic models are not AIs at all. They are mimicking parrots, nothing more. This direction is a dead end if you ask me.
All in all, to customize a high-quality feed, I now tend to agree that something like #nostrscript from nostr:nprofile1qqsr9cvzwc652r4m83d86ykplrnm9dg5gwdvzzn8ameanlvut35wy3gpz3mhxue69uhhyetvv9ujuerpd46hxtnfduq3qamnwvaz7tmwdaehgu3wwa5kuegpzemhxue69uhhyetvv9ujuurjd9kkzmpwdejhglzevy3 seems to be the better way to go.
It keeps the human element, but enhanced with the right tech it can be much more than just picking hashtags to follow.
We might see a bunch of better use cases for DVMs, but this is my overall sentiment right now.
I wonder if there are any examples of how it failed.
Yeah, watch your layoff, have fun
Zuckerberg is boasting about AI replacing devs in 2025. I see it this way: Zuckerberg spooks devs so it'll be easier for his HR department to insist on lower compensation, and spooked devs usually eat it because they're generally blinkered by too much work.
Hmm, there's a guy asking whether reaching out to former employees of a company on LinkedIn could help him decide on a job offer. I wonder if the information would be skewed: those employees left the company for a reason, so their opinions are certainly biased. And what is their incentive to respond seriously? You'll likely only get opinions from the work-unfocused, highly extroverted cohort, while the others ignore it to save their precious time as highly paid professionals.
Every time you speed up clicking controls on a website and get slowed down by a CAPTCHA, realize that a properly written bot has already parsed the entire site today, along with thousands of others.
Every time you click the 'I'm not a robot' checkbox, realize that 100 bots have already passed through the page today and you're just performing an action in vain.
A guy made a thoughtful point about centralized systems: 'Don't ask why they were banned from the platform - ask yourself why banning was possible in the first place.'
Very suspicious of the way things are going with NIP-32; it looks like censorship practice is trying to spread to the Nostr network.
Just did a spot-check comparison of DALL-E 3 and YandexArt; the first one is far more flexible. #AI
"Microsoft Office, like many companies in recent months, has slyly turned on an “opt-out” feature that scrapes your Word and Excel documents to train its internal AI systems. This setting is turned on by default, and you have to manually uncheck a box in order to opt out.
If you are a writer who uses MS Word to write any proprietary content (blog posts, novels, or any work you intend to protect with copyright and/or sell), you’re going to want to turn this feature off immediately.
I won’t beat around the bush. Microsoft Office doesn’t make it easy to opt out of this new AI privacy agreement, as the feature is hidden through a series of popup menus in your settings:
On a Windows computer, follow these steps to turn off “Connected Experiences”: File > Options > Trust Center > Trust Center Settings > Privacy Options > Privacy Settings > Optional Connected Experiences > Uncheck box: “Turn on optional connected experiences”"
https://medium.com/illumination/ms-word-is-using-you-to-train-ai-86d6a4d87021
#Microsoft #AI #GenerativeAI #AITraining #MSWord #Privacy #Word
So, next step, they silently start to train on your laptop files using an encrypted stream so you cannot prove it.
#AI #OpenAI #firearms
"OpenAI has cut off a developer who built a device that could respond to ChatGPT queries to aim and fire an automated rifle. The device went viral after a video on Reddit showed its developer reading firing commands aloud, after which a rifle beside him quickly began aiming and firing at nearby walls."
https://gizmodo.com/openai-shuts-down-developer-who-made-ai-powered-gun-turret-2000548092
So hypocritical of OpenAI.
#ai eliminates the bureaucracy bullshit that people have been creating for millennia
