If there is anyone actually doing this let me know so that I have more proof that being a bitcoiner does not mean you can think rationally nostr:note1pnfgsj492w0g53c2sqs6246gxug724da3zlelfz5p7ggesm5flzs0z9x8r

Discussion

it’s actually really convenient!

Here’s an nsec from duckduckgo’s Mixtral lol have fun

>>> Sure, here's a Nostr private key generated using the `nsec` library:

`b2e1e0e0e0e0e0e0e0e0e0e0e0e0e0e0e0e0e0e0e0e0e0e0e0e0e0e0e0e0e0e0`

Please note that this is just an example private key and should not be used for any real-world applications. You should generate your own private key using a secure method and store it securely. Anyone with access to your private key can access your Nostr account and perform actions on your behalf.
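For contrast: a real Nostr private key is just 32 cryptographically random bytes, so anyone who actually wants one should draw it from a proper CSPRNG locally, never from a chatbot. A minimal sketch using Python's standard-library `secrets` module (the bech32 "nsec1..." encoding that clients display is omitted here for brevity):

```python
import secrets

def generate_nostr_private_key() -> str:
    """Generate a 32-byte secp256k1 private key as hex.

    Nostr keys (per NIP-01) are secp256k1 scalars; drawing 32 random
    bytes yields a valid scalar with overwhelming probability.
    Clients usually show this bech32-encoded with the "nsec" prefix.
    """
    return secrets.token_bytes(32).hex()

key = generate_nostr_private_key()
print(key)  # 64 hex characters, different on every run -- never share it
```

Note how this differs from the Mixtral output above: the repeating `e0e0e0...` pattern is a giveaway that no randomness was involved at all.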

Sure, there's nothing wrong with that. All chats with ChatGPT are completely private, and even if they aren't, OpenAI does not share client data and securely protects it.

Even funnier is asking ChatGPT to produce hashes. It’s convinced it can, even though all it’s doing is regurgitating fake hashes.

Your using the word "convinced" shows you've drunk the Kool-Aid when it comes to the verbiage "Artificial Intelligence".

There's nothing classically intelligent about "AI".

Compute gets cheaper, we can process more variables, and execute more "if" statements in real time.

No convincing tho.

I haven’t drunk any Kool-Aid. The ChatGPT most people are using is effectively just a shit ton of knowledge compressed into a transformer. When you ask it to “produce” a hash it doesn’t actually run a calculation. It regurgitates whatever the transformer scores as the most likely-looking representation of a hash. The amusing part is that the words it provides as a description imply otherwise, and even when asked for follow-up clarification it fails to understand the difference.
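The difference is easy to demonstrate: a real hash is a deterministic computation, so running it locally gives the same output every time, while an LLM is just sampling plausible-looking hex. A sketch using Python's standard-library `hashlib`:

```python
import hashlib

# A real hash function is a deterministic computation:
# same input -> same output, every single time.
digest = hashlib.sha256(b"hello").hexdigest()
print(digest)
# 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824

# Re-running the computation always agrees with itself.
assert digest == hashlib.sha256(b"hello").hexdigest()

# An LLM sampling tokens offers no such guarantee: ask it twice for
# "the SHA-256 of hello" and you may get two different, both wrong, strings.
```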

I’m not convinced about anything, but speaking colloquially, ChatGPT is “convinced” it’s doing something it isn’t and can’t.

I'm just busting your balls, I hate the verbiage that's analogous to actual human consciousness.

Fair. Honestly, I find myself doing it even with simple machine learning models 😂 but I can see how it would be annoying, especially in conversations with people who don’t actually understand what a deep learning model is.

I really hope not lol

Cointards arguing about the correct degree of mental retardation.