Books aren't wikis or articles, but this third thing. Hence the new event.

There can be any number of editions, translations, and versions of the same book, and someone interested in a particular book might publish multiple ones.

Not just the Bible, but also "Tom Sawyer" or "Faust II" or "Plato's Republic", etc.


Discussion

i think you need to create a new tag with a new defined set of fields

if they are single-letter they should come up on index searches. as i mentioned, it's best to avoid a direct conflict with a tag that's already in use, so a simple tag search doesn't pull up useless stuff that overlaps too much with another, differently formatted tag (though people really should be using kind + tag for this)

I think so, too. #[4] #[5]

yeah, just need to structure them from more abstract to less abstract, eg, ["t","the bible","1 john", "3"] so each subsequent tag field drills down
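if a client did structure its "t" tags that way, drilling down is just a prefix match on the tag fields — a sketch with made-up events and ids:

```python
# sketch: drilling down through hierarchical "t" tags by prefix match
# (events and ids here are illustrative, not real)
events = [
    {"id": "a", "tags": [["t", "the bible", "1 john", "3"]]},
    {"id": "b", "tags": [["t", "the bible", "genesis", "1"]]},
    {"id": "c", "tags": [["t", "tom sawyer", "2"]]},
]

def match(event, *path):
    """true if any "t" tag starts with the given fields, most abstract first."""
    return any(t[0] == "t" and t[1:1 + len(path)] == list(path) for t in event["tags"])

print([e["id"] for e in events if match(e, "the bible")])            # ['a', 'b']
print([e["id"] for e in events if match(e, "the bible", "1 john")])  # ['a']
```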

just keep in mind that only single letter tag prefixes are guaranteed to be indexed for searching, a client can process multi-letter tags in other ways, such as metadata

Each one of those would get a different d-tag

“bible-randomstring”

“bible-sometimestamp”

“tomsawyer-blahblahblah”

But then you couldn't easily group them by "Bible" or "Tom-Sawyer".
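one way to keep that grouping, in line with the hierarchical-tag idea above: let each edition keep its unique "d" tag but also carry a shared tag naming the work — a sketch with hypothetical values:

```python
# hypothetical: unique "d" tag per edition, shared "t" tag per work,
# so grouping by work survives the random d-strings
events = [
    {"tags": [["d", "bible-a1b2c3"], ["t", "bible"]]},
    {"tags": [["d", "bible-1699999999"], ["t", "bible"]]},
    {"tags": [["d", "tomsawyer-blahblahblah"], ["t", "tom sawyer"]]},
]

def work(event):
    """return the work name from the event's "t" tag."""
    return next(value for key, value in event["tags"] if key == "t")

bibles = [e for e in events if work(e) == "bible"]
print(len(bibles))  # 2
```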

Feels like a loss of information. That's been bothering me the whole time. Generally feels like we're not really supplying sufficient metadata. Why should someone choose one entry over another? Can I search for entries from X author or publisher or that were published before 1950 and written in English?

A book isn't a shitpost or five paragraph article.

Feels like we're forgetting an entire feature.

And then you end up with convoluted d-strings like "Bible-RSV-1965-Foreword-by-Sheen-British-edition".

well, then have a look at how ISBN codes and all those other existing metadata conventions work and map it to the tags

Yeah, I did. Hence my comment, earlier. We could even auto-fill fields if they supply the ISBN. Otherwise, they have to fill them in manually or leave them empty.

ISBN is powerful. We should at least consider including it.
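Since ISBN-13 carries a built-in check digit, auto-fill could start by validating whatever the user supplies before any lookup — a minimal sketch, not tied to any particular lookup service:

```python
def isbn13_valid(isbn: str) -> bool:
    """ISBN-13 check digit: digits weighted 1,3,1,3,... must sum to a multiple of 10."""
    s = isbn.replace("-", "").replace(" ", "")
    if len(s) != 13 or not s.isdigit():
        return False
    return sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(s)) % 10 == 0

print(isbn13_valid("978-0-306-40615-7"))  # True (a commonly cited example ISBN)
print(isbn13_valid("978-0-306-40615-0"))  # False (corrupted check digit)
```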

It feels like we're torturing our new event kind in order to fit it into other data models, instead of structuring it the way it would be most useful to the actual end user.

Instead of saying "modeled on Kind 0/1/30818/30whatever" we should be asking ourselves:

If this were a card out of a library catalogue, what should it have on it?

Like this. What would I need to fill out to document the basic metadata for this journal?

Title, Volume, Edition/Number, Published on (why "at" and not "on"?), Published by

https://www.jstor.org/stable/e27116117
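those catalogue fields map naturally onto tags — a sketch where the field names ("title", "published_on", ...) are illustrative choices, not a spec:

```python
# hypothetical mapping of library-catalogue fields onto nostr-style tags
catalogue = {
    "title": "The Adventures of Tom Sawyer",
    "volume": "1",
    "edition": "1st",
    "published_on": "1876",  # a plain date string, not a unix timestamp
    "published_by": "American Publishing Company",
}
tags = [[key, value] for key, value in catalogue.items()]
print(tags)
```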

Ah, it's "at" instead of "on" because it's a time and not a date. I don't actually know the precise time something was originally published, though. Just something like 34 B.C. or July 10, 1543.

What is the UNIX time stamp for the Bible?

I guess the time of the publishing of that version? Okay, but then we definitely have to name the version.

for dates, designate a standard format like 1022-04-20 - this is also conveniently lexicographically sortable
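the sorting property is easy to see: zero-padded, big-endian date strings sort chronologically as plain strings, no date parsing needed:

```python
# ISO-8601-style dates (YYYY-MM-DD) sort chronologically as plain strings
dates = ["1543-07-10", "1022-04-20", "1965-01-01", "1022-04-19"]
print(sorted(dates))  # ['1022-04-19', '1022-04-20', '1543-07-10', '1965-01-01']
```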

I'm thinking they went with UNIX timestamp because they assumed the source would always be another note.

or they didn't think about the problem of how much public domain literature there is that overflows even negative unix int64 timestamps - it's 1970-2140 for 64 bits - 2037 for 32 bits, so i think if you allow negative ones to mean before UNIX epoch then it couldn't be more than 1800 before it flips to ~2140
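for what it's worth, the actual ranges are easy to check, assuming the value is seconds since the epoch — signed 32 bits gives the familiar 1901/2038 bounds, while signed 64 bits of seconds reaches far past any publication date:

```python
from datetime import datetime, timezone

# signed 32-bit seconds since the epoch: the classic Y2038 bounds
print(datetime.fromtimestamp(2**31 - 1, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00
print(datetime.fromtimestamp(-2**31, tz=timezone.utc))     # 1901-12-13 20:45:52+00:00

# signed 64-bit seconds overflow datetime entirely: roughly +/- 292 billion years
print(f"{(2**63 - 1) / (365.2425 * 86400):.3e} years")
```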

nah, in fact, relays can and do reject events with funny timestamps, certainly, auth timestamps "should be" constrained to a short window of a minute or less in order to stop replay attacks

haha, that's more than 64 bits. even if we allow those past dates to have negative numbers, they will only give us until like 1630 or something

UNIX timestamp for Plato's Republic, assuming 01 January 392 B. C. at 12:00 am is -19374092131393201937362373274. 🤣

over 19 digits is more than 64 bits

but, interesting point, the value is a JSON integer, i don't think it allows over 64 bit unsigned anyhow

bigger numbers are generally treated using generalised "big integer" math that uses byte arrays and arbitrary lengths, and to encode them in JSON you will usually then need to put them in a string, that's also why they are strings for npubs and event ids
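a quick illustration of why oversized integers end up as strings in JSON: python will happily emit the bare number, but any consumer parsing numbers into IEEE-754 doubles (JavaScript, notably) can't hold it exactly, so the interop-safe form is a string — using the joke timestamp from earlier:

```python
import json

big = -19374092131393201937362373274  # far more than 64 bits

print(json.dumps({"created_at": big}))       # bare number: legal JSON, risky interop
print(json.dumps({"created_at": str(big)}))  # string: survives any parser
# the nearest double is not the same number, so double-based parsers corrupt it:
print(float(big) == big)  # False
```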

also, -382-01-01T12:00 is shorter