Blake
b2dd40097e4d04b1a56fb3b65fc1d1aaf2929ad30fd842c74d68b9908744495b
#Bitcoin #Nostr #Freedom wss://relay.nostrgraph.net

Seems to be mostly them. Do they render as images or something?

What am I missing? Why do so many posts and profiles have :some_thing: ?

WebSocket connection headers can contain an Accept key and value - with a preferred wire protocol. It can be both.
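A minimal sketch of how that could look from a client, assuming Node's ws package; the media types in the Accept value are purely illustrative, not any agreed spec.

import WebSocket from "ws";
// Hypothetical wire-protocol negotiation during the upgrade request: the
// client advertises a preferred encoding via an Accept header and the relay
// can honour it or fall back to JSON. The media types here are made up.
const ws = new WebSocket("wss://relay.nostrgraph.net", {
  headers: {
    Accept: "application/nostr+msgpack, application/json;q=0.5",
  },
});
ws.on("open", () => {
  // The relay's 101 response headers would indicate which encoding it chose.
  console.log("connected");
});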

My concern is more that parsing JSON should be at least 10-100x faster than your network today on mobile. The network should be the bottleneck… if it isn’t, we have optimisation to do first.

https://arxiv.org/abs/1902.08318
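A rough sanity check of that claim - just a sketch in Node with a made-up sample event; real numbers vary a lot by device and parser.

// Measure JSON.parse throughput on a typical-looking Nostr event and compare
// it against a mobile link. The sample event and the 50 Mbps figure are
// illustrative only.
const sampleEvent = JSON.stringify({
  id: "0".repeat(64),
  pubkey: "0".repeat(64),
  created_at: 1690000000,
  kind: 1,
  tags: [["p", "0".repeat(64)]],
  content: "hello nostr ".repeat(20),
  sig: "0".repeat(128),
});
const iterations = 100_000;
const start = performance.now();
for (let i = 0; i < iterations; i++) JSON.parse(sampleEvent);
const seconds = (performance.now() - start) / 1000;
const mbPerSecond = (sampleEvent.length * iterations) / 1e6 / seconds;
console.log(`~${mbPerSecond.toFixed(0)} MB/s parsed vs ~6 MB/s on a 50 Mbps link`);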

And your next problem kind of becomes buying voters. Find people who value money over voting and will sell you their vote.

Australian politicians who speak languages other than English literally target community voters who cannot speak English and give them pre-filled ballots to sign. (To clarify, this is branch stacking, less so normal elections.) It’s illegal - but nothing has happened to anyone who’s been caught.

Democracy is largely an illusion today. It’s not fair. It’s not the voice of the people. It’s a game that the same people who already hold power exploit - to give themselves new laws and to keep the game going.

Slight tangent… however voting is very hard to solve fairly and reliably.

I’m not actually sure it has a solution that isn’t centralised - where you have some kind of pre-registration (maybe KYC, but blinded after) - and then maybe vote anonymisation (if blind voting).

Keep in mind double voting isn’t technically just the same identity voting twice (we could detect duplicate reactions from the same pubkey for example), but also someone creating two+ identities and voting a second time.

If you have a closed community of voters, you could perhaps manually share a secret or something - and a double vote would be obvious when the number of people who can vote is less than the total votes - at least one vote is fraudulent.
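As a sketch of the easy half only - flagging the same pubkey voting twice - with a simplified vote shape; it says nothing about one person holding multiple keys.

type Vote = { pubkey: string; choice: string };
// Tally votes, counting each pubkey once and flagging repeats. Catching a
// second identity made by the same person is the part this cannot solve.
function tally(votes: Vote[]) {
  const seen = new Set<string>();
  const duplicates: string[] = [];
  const counts = new Map<string, number>();
  for (const v of votes) {
    if (seen.has(v.pubkey)) {
      duplicates.push(v.pubkey);
      continue;
    }
    seen.add(v.pubkey);
    counts.set(v.choice, (counts.get(v.choice) ?? 0) + 1);
  }
  return { counts, duplicates };
}
// In a closed community of N eligible voters, total votes > N also proves at
// least one fraudulent vote even when every pubkey looks unique.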

I’m kind of glad anyway, as paper ballots are significantly harder to commit fraud with. Any government electronic voting system is extremely dangerous and should be rejected as an option.

I could be missing something that may exist.

For clarity, the ‘converse with’ metric is kind of a “seen in event p-tag groups the most” at present. It’s valuable - however it doesn’t directly translate to a ‘ranked list of who you specifically message the most’. But it certainly captures which people are most active in any groups you reside in.
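Roughly, the score could be sketched like this - counting how often other pubkeys show up in the p-tag groups of events that also include you. The event shape is simplified and this isn’t the exact implementation.

type NoteEvent = { pubkey: string; tags: string[][] };
// "Converse with" as p-tag group co-occurrence: for every event whose author
// plus p-tags include you, every other member of that group gets a point.
function converseWith(me: string, events: NoteEvent[]): Map<string, number> {
  const score = new Map<string, number>();
  for (const ev of events) {
    const ptags = ev.tags.filter((t) => t[0] === "p").map((t) => t[1]);
    const group = new Set([ev.pubkey, ...ptags]);
    if (!group.has(me)) continue; // only groups you reside in
    for (const pk of group) {
      if (pk === me) continue;
      score.set(pk, (score.get(pk) ?? 0) + 1);
    }
  }
  return score; // higher = appears with you in more p-tag groups
}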

If people see a lot of value in the zap metrics, which I suspect may be the case, I may invest more time to see how far we can get with accuracy for example. At present I’m not sure the spec is solid enough to allow that.

It will never be as complete or accurate as your actual wallet - which is a limitation that’s hard to reason about, both as a dev and as a user.

From memory, it’s more an issue that WoS needs to implement the rest of the NIP spec, as opposed to me not supporting them.

I’ve seen some Alby processing errors too that I’ll have to review, in case they should validate.

At present my zap validation doesn’t support lud16 addresses either. I’ve written support, but it needs to be finished.
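For reference, the lud16 part is mostly the LUD-16 address-to-URL step; a sketch below - the validation around it is the unfinished bit.

// LUD-16: "name@domain" maps to https://domain/.well-known/lnurlp/name, whose
// JSON response carries the pay endpoint used when building zap requests.
function lud16ToUrl(address: string): URL {
  const [name, domain] = address.split("@");
  if (!name || !domain) throw new Error(`invalid lud16 address: ${address}`);
  return new URL(`https://${domain}/.well-known/lnurlp/${name}`);
}
// lud16ToUrl("alice@getalby.com")
// -> https://getalby.com/.well-known/lnurlp/alice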

Effectively, zap metrics/stats are really experimental, and I’m not sure they can ever become complete or broadly valuable.

Sounds like my first Gossip experience.. I was confused as. I actually don’t know what I did to make it work other than updating and trying again a few weeks later.

I think you need to pull/merge your following contact list locally. It’s a manual process and not done automatically in Gossip.

Yep. It would be an experiment. Not sure it would work anyway… but interesting.

However, the purpose of showing an average is to set a baseline (or current market price/value) - so others can mentally choose to go below it, meet it, or go above it.

It likely depends more on how many zaps an average Nostr post gets in general. If posts only get 3-4, it’s less useful. If it’s a popular OnlyFans-style post, there are lots of zaps and they likely continue over a longer period of time for that single post (or a blog post; other examples exist too).
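The baseline itself is trivial to compute - a sketch, with made-up amounts.

// Average zap amount (sats) for a post, used as the mental baseline others
// can go below, meet, or exceed. Noisy with 3-4 zaps, meaningful with many.
function averageZap(amountsSats: number[]): number | null {
  if (amountsSats.length === 0) return null;
  return Math.round(amountsSats.reduce((a, b) => a + b, 0) / amountsSats.length);
}
console.log(averageZap([21, 100, 500, 21])); // ~161 sats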

Yep. If you provably burn the sats instead of transferring them.

It doesn’t solve the double vote issue however.

Replying to Vitor Pamplona

Here are my comments on nostr:npub1xtscya34g58tk0z605fvr788k263gsu6cy9x0mhnm87echrgufzsevkk5s 's NIP-94 alternative

Intro: The proposal adds the existing NIP-94 tags (Will calls them image metadata) into Kind 1 event tags directly.

You can see an example here: https://www.nostr.guru/e/f89181db5a8ca1b469a9bce65f72e5640afb598e194fb9320e70edc634ea3a96

This means:

1. If we do only 1 tag per url for all metadata attributes, we might be tied to only 3 attributes per image or some relays will crash: Some relays use a regular relational model for tags with limited columns in their tables. We can't just add as many tag attributes as we want per url. And 3 is definitely not enough if we want to provide good metadata (hash, accessibility descriptor, blurhash, size, decryption info, etc)

['imeta', 'url1', 'hash', 'descriptor', 'size','blurhash', ..]

['imeta', 'url2', 'hash', 'descriptor', 'size','blurhash', ..]

2. If we do n tags, 1 for each attribute, we can have as many attributes as we want, but it duplicates the url string in each attribute to allow the client to reassemble the group of attributes to the same url.

Like this:

['r', 'url1', 'hash', '...']

['r', 'url1', 'descriptor', '...']

['r', 'url2', 'hash', '...']

['r', 'url2', 'descriptor', '...']

3. On NIP-94 the event and its metadata are created only once and reused in all kind 1 messages that use that Event ID. In Will's proposal, every kind 1 that includes the same URL must duplicate all the metadata related to that URL. This means significantly more data storage needs for relays and more data plan use for folks that are receiving the same url in many posts.

4. The proposal uses an encoding scheme for tags that we have not seen in Nostr yet. Essentially, instead of having the content directly, every tag attribute will be prefixed with its name (see example 1). So, clients now must parse these tags differently than everything else. And in order to parse them, they must know the prefix of every option available.
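As a hedged illustration of point 4 only: assuming each entry after the tag name is a "key value" string (the proposal's exact encoding may differ), a client-side parser ends up needing a known prefix list, roughly like this.

// Parse prefix-encoded tag attributes into a map. The prefix list and the
// "key value" layout are assumptions for illustration, not the final spec.
function parsePrefixedTag(tag: string[]): Record<string, string> {
  const knownPrefixes = new Set(["url", "hash", "alt", "blurhash", "dim", "size"]);
  const attrs: Record<string, string> = {};
  for (const entry of tag.slice(1)) {
    const space = entry.indexOf(" ");
    if (space === -1) continue;
    const key = entry.slice(0, space);
    if (!knownPrefixes.has(key)) continue; // unknown prefixes get dropped
    attrs[key] = entry.slice(space + 1);
  }
  return attrs;
}
// parsePrefixedTag(["imeta", "url https://example.com/a.jpg", "dim 3024x4032"])
// -> { url: "https://example.com/a.jpg", dim: "3024x4032" }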

Summary:

I don't think this is a good idea:

1. It requires significantly more data. No matter which option we choose, there will be duplicated metadata everywhere.

2. It requires a new custom encoding inside tags that we need to get a good review on.

3. There is not enough space to add the minimum tags Amethyst already works with in every image.

4. It duplicates the specification of the metadata (field names, possible values, semantic meanings, etc) from NIP-94. We don't need two ways of doing the same thing.

5. It's also not a NIP yet.

I think we’re talking about two distinct use cases here. Hence the contention.

1. Better media handling for kind 1 events.

2. Generic hash-addressable file mapping events.

It would be ideal if we could somehow share much of the implementation, and at least overlap in language.

For #1, the 90% use case is a single image with a blurhash for nicer UX - and needing to query a second event for that is heavy. Querying for a kind 1 event, then parsing it, then querying for a kind 1063, and then doing an HTTP/torrent fetch isn’t likely to perform well for clients rendering timeline views.

Not all Nostr-referenced files need a kind 1 event pair/parent.

There is the option for relays to embed related/child events, say in a new parent event key, like related_events: [] or similar. I suspect we need something like this anyway. Similar to how the first event a relay sends for that pubkey could embed the profile/meta event - as it’s always desirable (ignoring dupe response data across relay connections).
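Purely as a hypothetical shape for that idea - no such key exists in any NIP today.

interface NostrEvent {
  id: string;
  pubkey: string;
  created_at: number;
  kind: number;
  tags: string[][];
  content: string;
  sig: string;
}
// Hypothetical envelope a relay could send instead of the bare event, bundling
// the author's kind 0 profile and any referenced kind 1063 file events so the
// client avoids extra round trips.
interface EventWithRelated {
  event: NostrEvent;
  related_events: NostrEvent[];
}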

For #2, it’s building a filesystem-like mapping (an inode) that can be used for all file types and different hosting/access approaches. It’s more usable by all event kinds in the future.

I think we will likely end up with two separate approaches. If a kind 1 wants greater redundancy or media access methods, it should likely then default to the kind 1063 approach - basically advanced mode. Else, it can use simple mode.

However, it’s also worth researching other existing projects and approaches more… as we may not have the approach angle correct.

It’s ambiguous today how to handle additional values in tags - except for a few NIPs that define second, third and fourth values - like pubkey, relay, petname (NIP-02) or hex, relay, marker (NIP-10).

Data-architecture-wise, it should likely be split into tags and tag-values tables with a many-to-many relationship.

However, does [p, PUBKEYA, PUBKEYB] become two tag-value instances both pointing to tag p - or does PUBKEYA have a relationship to PUBKEYB, meaning it wasn’t a normal p tag?

SQL wise, it’s hard to model correctly.
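To make the ambiguity concrete, here are the two shapes it could normalise into - table names and columns are illustrative only.

// Interpretation A: ["p", PUBKEYA, PUBKEYB] is one tag row with two ordered
// values, both hanging off the same "p" tag.
interface TagRow { id: number; event_id: string; name: string }
interface TagValueRow { tag_id: number; position: number; value: string }
// Interpretation B: the second value qualifies the first (PUBKEYA relates to
// PUBKEYB), so it is really one typed row, not a plain multi-value p tag.
interface PTagRelationRow { event_id: string; pubkey: string; related_pubkey: string }
// Neither can be picked generically: what positions 2..n mean depends on the
// NIP that defined the tag, which is what makes the SQL modelling hard.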

Yep. Same experience with Rust. It pulls libraries and functions out of its ass that don’t exist, or often dead code, as it’s a few major versions behind.

The fog of certainty is real - the responses are so confident - and then you have to reply with… you made this dumb mistake, that doesn’t exist, that won’t compile, etc. Can you rewrite it using functional programming instead of if/else.

This is exactly the kind of thing I’m hoping we can build a NIP for - subscriptions/credit management. A generic way to manage this stuff - instead of lots of bespoke private relay, private translate, etc. flows. A single UX for all.

Happy to spend more time on this if you’re interested. It’s mostly a brain dump, but I have a few people interested already.

https://gist.github.com/blakejakopovic/a0deee4c945c122a59ed2dcf442d2e2a

No idea. I find it interesting how it’s not as clear and simple as it seems.

Individual clients could make other decisions or support user choice over poster. One suggestion was to show average value instead of total amount - but again, it’s up to clients or users to choose.

Yep. Kind of forgot mobile does have a few autocomplete approaches.. not just tab. Too much coding and I never use auto complete on mobile. 🫠

I like it.