What if we had a Wikipedia where users could attach proof of work to competing edits and the one with the most proof of work would be the default view, but competing edits could also be seen in order of their PoW
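To make that concrete, here's a toy sketch of hashcash-style ranking of competing edits. All names are hypothetical and this assumes SHA-256 proof of work with leading-zero-bit difficulty; a real system (e.g. something NIP-13-like) would differ in the details.

```python
import hashlib


def pow_score(edit_text: str, nonce: int) -> int:
    """Count leading zero bits of SHA-256(edit:nonce); more zeros = more work."""
    digest = hashlib.sha256(f"{edit_text}:{nonce}".encode()).digest()
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()  # leading zeros of first nonzero byte
        break
    return bits


def mine(edit_text: str, target_bits: int) -> int:
    """Grind nonces until the hash has at least target_bits leading zeros."""
    nonce = 0
    while pow_score(edit_text, nonce) < target_bits:
        nonce += 1
    return nonce


def rank_edits(edits: dict[str, int]) -> list[str]:
    """Order competing edits by attached proof of work; first is the default view."""
    return sorted(edits, key=lambda e: pow_score(e, edits[e]), reverse=True)
```

So the default view is just `rank_edits(...)[0]`, and the rest are still visible in PoW order.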
Yeah international bodies will issue statements and try to rally support for x or y local action. Some will be too inept to follow the asks, some will ignore, some will aggressively dissent. We just keep preaching the good word of individual liberty. 🫂
Man, I love the atmosphere, period. Sometimes I just sit there and do nothing but breathe it in until my lungs and my heart are full.
You’re referring to the RLHF?
You know… I don’t know anything about you or your parents. But people often regret doing this. 🫂🫂🫂
Who’s abandoning lightning?
🤔
nostr:note19dtka7muavskhcqwd3tk0z4eddnddqxylu866dqd834phhd5gyss8dfeqx
If that’s the only one you’ve messed around with, maybe that’s part of the overpromising vibe you’ve gotten. Llama 2 7B is quite a bit less capable than the full Llama 2 model, and that model is quite a bit less capable than GPT-4. Token generation on your Embassy is probably a lot slower than what I’m used to on OpenAI’s servers… though self-hosted is awesome. I want to get some hardware to do that myself someday.
What model(s) are you using?
I don’t know how much it matters, but it seems important to stress that it’s not a search engine at all. Maybe there are times when they’re useful as a search engine, but mostly it’s a thinking engine.
And when I use it as a search engine, I think of it as almost the opposite of robust - it behaves more like one person’s opinion than a summary of what humanity is saying about a topic.
Totally agree that the future is speculation, but also projecting that past trends in capability improvements will continue (especially as it gets exponentially more funding and attention) doesn’t seem too crazy.
One thing search engines can’t do is think with you about something that’s never been asked before. These can.
That may be true, but if you compare text generation from today to 4 years ago, it’s exponentially better today. Self-driving isn’t really like that, and even it has improved a lot in the last 10 years.
Well, maybe I misunderstand what you're saying 🫂
"They are very robust search engines, nothing more."
"[...] There is nothing impressive with any of these. Clippy 2.0 at best."
Explaining why "I’d argue people are the same in the ways that matter for short term economic productivity" is relevant:
When I read the above quotes, it seems like you’re saying that AI is unlikely to have much more effect on the economy and wages than assistants like Clippy did.
But I’m pointing out that many people are currently getting paid mostly for doing exactly what you say this is limited to: "finding, analyzing, and relaying relevant information." And if we can get it to do that for robotic movement planning, then again many more people are getting paid for something this machine can do. Machines are likely to get better and outcompete people.
If that's true then it's not "nothing more". 🫂
I’d argue people are the same in the ways that matter for short term economic productivity
What - if it were true - would change your mind?
I think you offer more than the data you have access to. But most of your employability likely comes directly from the data you have access to. That’s why people who are considering hiring you want you to have relevant experience instead of 5 years of work as a hunter-gatherer.
Technically (and maybe relevantly), LLMs don’t actually have a search step. They just read some text and then continue it in ways that were reinforced by their training. So they aren’t searching, copying, and pasting. They are studying humanity and providing you a mirror of it according to your prompt. Feel free to ask for the average mirror, or the genius mirror, and put it to work.
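A toy way to see "continuation, not retrieval": here's a tiny bigram model. This is obviously not how an actual LLM works internally (no neural net, no tokenizer), and all names are made up, but the shape is the same: training records statistics, and generation just keeps sampling a plausible next word from the prompt, with no document lookup anywhere.

```python
import random
from collections import defaultdict


def train_bigrams(corpus: str) -> dict[str, list[str]]:
    """'Training': record which word tends to follow which. No documents are kept for lookup."""
    follows = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows


def continue_text(prompt: str, follows: dict[str, list[str]], n: int = 5, seed: int = 0) -> str:
    """'Inference': repeatedly sample a likely next word. Continuation, not search."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)
```

The point of the sketch: the "model" here is just learned statistics, and the output is whatever continuation those statistics make likely given your prompt.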
Today they make surprising mistakes, but there doesn’t seem to be a reason to think they always will. Even today, if you have one do some self-criticism on what it’s beginning to say, it often corrects its mistakes. And that’s without a vastly smarter model. Vastly smarter models seem possible, and coming.
nostr:note10n8qsmel2t4jg7vs4999laulqv88sjya0ehr7emvkad260tr2w2sex636y
When computers are able to do entire jobs better than humans, those humans will respond by retraining into jobs that they can’t be replaced in.
This will continue until humans are only doing those jobs that machines can’t do. These may mostly be providing for the emotional needs of others.
The problem is that labor supply will grow 20-100x while demand drops by a similar amount.
Human earnings are going to zero.
I don’t make the rules. (I only guess at what they are)
#AI
Custodial*
The difference is kinda crucial 😂😅