nomadshiba⚡
45835c36f41d979bc8129830f2f5d92562f5343d6feddd6f30aa79480730f26e
knotzi ₿ - #ArchiveCore - 300KB blocks
i make stuff (rabbit hole for other links): https://github.com/DeepDoge
get your npub name: https://npub.name
in case you wanna send more bitcoin, i also accept silent payments: sp1qqwdknqgz7v2ph8hxjc9t2nz3frqazjkhu7c5ar5w03tn0amw3ugrsq5zmaznxjuce70l6p47t5vm25qngxnwqgk025csgr735uds0y9wsgjkuhfc

don't mind this post.

it's just that yt deleted my comment on a video, and i'm just gonna copy-paste it here.

---

one is still decentralized: even if the entities are internally centralized, on a broader scale they are decentralized, all trying different methods and experimenting in parallel. parallel is the keyword here: multiple experiments can run at the same time, with different interests and ideas. and if one fails, the others don't fail with it.

and the issue with computers deciding things is the goals. who decides the temperature of the thermostat? is the average of everyone's desires good enough for you?

people have different wants and needs, they have different goals.

it's basically being pets/cattle (feelings, happiness) vs being a sovereign individual/lineage.

wanting to take something small and apply it globally is the main issue.

the system is decentralized even if it has internally centralized entities. and that's the point: it can split at any level, any time.

as soon as you want it all centralized, you just become invasive. it's about having alternatives making multiple decisions in parallel, vs having one thing making all the decisions.

one can be "yes" and "no" at the same time, other has to be "yes" or "no", you cant pick both, and everyone has to follow it. classic issue of democracy.

amazon, google, the appstore etc. are just information interfaces, middlemen.

given a long enough time, nostr will replace each one of these and more.

because nostr doesn't have to worry about network effects; it's all about providing an interface or service.

i can seamlessly switch between nostr clients, and i don't have to leave anything behind.

which makes it win in the long run, because many clients can experiment in parallel and try to provide the best ux.

but they don't compete for the content or the network. they all feed the same network and user base.

you can ask: when people open yt, do they really want to spend the most time on it?

is attention a good metric for what people actually want? do they want the things that get their attention?

nostr also helps you here, because you can locally pick whatever metric you want, or none at all, or pick a curator. it's up to you.
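to pin the idea down, a minimal sketch of client-side ranking in python. everything here is assumed for illustration: events as plain dicts, and `my_curator_picks` as ids fetched from whoever you chose to trust.

```python
from typing import Callable, Optional

# ids picked by a curator you chose to follow (assumed, fetched elsewhere)
my_curator_picks: set[str] = set()

def rank_feed(events: list[dict], metric: Optional[Callable[[dict], float]]) -> list[dict]:
    if metric is None:
        # no metric at all: plain chronological feed
        return sorted(events, key=lambda e: e["created_at"], reverse=True)
    return sorted(events, key=metric, reverse=True)

# pick whatever metric you want, locally; no server decides for you
by_zaps = lambda e: e.get("zap_total", 0)
by_friends = lambda e: e.get("followed_by_friends", 0)
by_curator = lambda e: 1.0 if e["id"] in my_curator_picks else 0.0
```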

---

YT Video ID: mayoL3XbKwA

(i didn't post the full link, because i don't want clients to render embeds. i'm just logging my comment)

it has some missing stuff, like rebroadcasting posts and deleting posts, and managing relays is occasionally unstable. but it's still cool.

i still use other clients for the missing stuff, but this is my main client now, mostly because of the ux.

YakiHonne is now my main nostr client.

using it a lot more than the others. great ux.

i just took a look at it again; i still can't understand how exactly Ark stops the service from emptying the pool.

i mean, in my head i tried to imagine how i would make something like Ark: how i would be able to receive UTXOs on the base layer for my VTXOs without the service being online, and how the service would stop liars.

but i have no idea what is stopping the service from creating fake VTXOs to empty the pool. i can imagine users might try to prove onchain that the service is lying, but then they have to share some history onchain? which makes it expensive. idk exactly how it works, but it doesn't sound right.

i mean, i like cashu. mints in cashu can also take the bitcoin and leave, but that protocol doesn't claim to be non-custodial. also, in cashu you can input ecash from multiple mints to pay a single lightning invoice, which is cool.

but cashu mints kinda want too much in fees for lightning payments.

that's the only downside. many lightning wallets, even custodial ones, want way too much in fees i think. idk.

tbh like a year ago

i mean, while mining is centralized, and core has the monopoly and just makes changes on impulse, anything can happen.

we need to make the space more separated, more decentralized, to lock things in place.

never sacrifice decentralization for anything else. because then we have nothing.

i think we are currently not in an ideal place. ideally, node runners shouldn't only be people who wanna support the network and relay txs.

node runners should be everyday bitcoin users. and for that, node software should optimize/compress storage and bandwidth better, and we need far better ux.

our baseline for running a node shouldn't be "anybody with a couple of hundred dollars". the baseline should be anyone with an average laptop or phone that they already use daily.

Replying to mike

It's important to remember the original argument as it evolves over time.

In war, the winner takes it all and the loser is evil personified.

The blocksize wars were not originally against large blocks. Small blockers (devs) were against pushing the requirements to run a node beyond what an enthusiast could run at home for a couple of hundred dollars.

Zooming out further, historians accurately corrected the "Blocksize wars" name to the "Fork wars", whereby the small blockers prevented miners from activating a soft fork to allow larger blocks. If you could permit such forks, then you could recover lost coins like Ethereum did, or, worse still, increase the 21M coin limit.

Zooming out further, the small blockers won the war, yet managed to increase the blocksize from 1MB to 2MB, and further to 4MB of compressed storage using SegWit.

SegWit led to Taproot, which led to the ability to create Ordinals, or Bitcoin NFTs, or as many people call it, Spam.

Thus the blocksize war itself had the unintended consequence of creating the current OP_RETURN war, which is trying to reclaim the small blocks for transactions, which have precious little space available as it is.

In the blocksize war, we made increasing the block size an act of pure evil. I suggest it isn't. I suggest that, while we ensure it remains possible for most people to run their own node, increasing the blocksize within that limit is perhaps desirable if you wish to close the gap between Lightning payments and on-chain transactions.

Currently there is no apparent issue, but as Bitcoin adoption progresses, there is going to be a gap between the liquidity potential of Lightning and the minimum viable transaction value for the mainchain.

I have recently become aware of the Ark protocol, which appears to be an interesting option, but it is yet another attempt at making transaction space available.

But the real question is should we consider the ultimate heresy of increasing the blocksize?

nostr:nevent1qvzqqqqqqypzp6pmv65w6tfhcp73404xuxcqpg24f8rf2z86f3v824td22c9ymptqqsfz0y3evdpzhwqhushn5w744magjzuy99v58w6me9eewk02q6trqqpky085

my view is: decentralization, aka being able to run a node, is what makes bitcoin not crypto.

the bitcoin network is like a big group chat. everyone should be able to follow it and explore its history locally. that's why we need minimal entry requirements to verify things.

we should learn to live with the constraints we have. evolve around them.

i think cashu is fine, we just need a way to verify that mints are not overprinting. i think i have seen papers before about how to prove cashu reserves.

and of course we can always discover more methods, newer cryptography.

but i think we should treat the block size like gravity, a law of nature. and work with it.

DATUM is important.

i think we should lower it. it grows too fast 😂

decentralization is the most important thing, and should trump everything else, even privacy.

you should NEVER sacrifice decentralization.

get your nip05 npub name from https://npub.name

please ;-;

run knots, mine with datum.

decentralize mining by building your own blocks.

decentralize the nodes and the network by making running a node easier and cheaper. more fun. more user friendly. make running a full node take less space. etc.

are you looking for validation, or do you wanna learn something? maybe you are asking the wrong questions. maybe you need to watch it all.

maybe you need to watch and listen to everything first before answering or asking people, to catch up with the mindset and knowledge.

so you can actually ask new questions.

i was talking about delaying blocks months ago, then decided it won't work, because we have a bigger problem: mining centralization. and we have DATUM to solve that.

Foundry can delay its own blocks rn, and try to build on them a second time before publishing. and it would benefit them.

---

we already have mining centralization. and core seems to also wanna have node centralization.

---

tbh this is not productive at all. this thread is all over the place. not only do i have to explain things (many things), i also have to disprove things to you. it's too much work.

i used to dislike Mathew; now i agree with 95% of everything he says. he has enough technical understanding, he's sharp and objective, and he definitely understands the game theory of bitcoin.

he is one of the few bitcoiners left on yt. the rest just talk about price, or are just evm immigrants.

if someone is not running a node on their laptop, i question some of the things they say tbh.

mechanic is calm and explains things in his videos really well.

you expect everyone to still talk about things calmly. you still have faith in core, and think of this as just some minor idea or opinion disagreement, like a few months ago.

mechanic said many times in his previous videos that he didn't believe what people were saying about core. but if you watch his latest, he might have said it, like many of us.

because at this point core is not naive; core is very aware of what they have been doing slowly for the past 2-3 years. they are very open tbh.

i don't understand what you are expecting at this point.

yeah, your word choices, the way you talk about things, and your points are copy-paste talking points, nothing new, and sound a lot like core's misleading talking points.

everything you talk about has been answered, many many times, by known people in the space. i myself have talked about these things many many times from my point of view. when you repeat them here, it's normal that i question your humanity.

i will answer, but not now, because i don't wanna give my head to it atm. it's exhausting explaining simple things over and over again.

here, you can check these people out until i give you my own take on bitcoin. hope you see the truth, and the lies.

https://youtube.com/@bitcoinmechanic

https://youtube.com/@Bitcoin_University

Replying to SatsAndSports

If the Knots community is serious about discouraging large op_returns ('lops'), their nodes should implement what I call a "treacle-fork". It's not a hard-fork, but it puts maximum pressure on miners who include these 'lops', to encourage them to stop mining lops.

When a Knots node sees a mined block that includes any lops, the node should NOT (immediately) forward the block to other nodes. It should pretend that it didn't see the block.

I know you might think this will cause a hard fork and split the chain. However, to avoid a hard fork, I would allow that whenever a 'lop-free' block is found, Knots will accept and forward that block *and it will also (retrospectively) accept+propagate all the ancestor blocks*, even if those ancestor blocks contain lops.

With this system, Knots still follows the existing consensus rules, but miners will see a propagation delay if they mine lops. Miners who include only small op_returns will see their blocks propagate more quickly and will therefore be rewarded by winning the race more often.

Miners who include lops will see their confirmations stuck temporarily, as if they are stuck in treacle.

As a contingency to avoid falling too far behind, if there are three consecutive blocks containing lops, perhaps Knots could just give up.

I should close by saying that I'm not personally strongly against large op_returns, I just find it interesting to think about the best way for nodes to apply pressure against transactions they don't like. It's a fun mental exercise!
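a minimal sketch of that proposed rule in python, just to pin down the logic; `has_lops` and `forward` are assumed hooks into the node, and only the hold/release/give-up behavior comes from the post above:

```python
# held = lop blocks we pretend not to have seen yet
held = []

def on_new_block(block, has_lops, forward):
    global held
    if has_lops(block):
        held.append(block)
        # contingency: after three consecutive lop blocks, give up
        # and release everything so we never fall too far behind
        if len(held) >= 3:
            for b in held:
                forward(b)
            held = []
        # otherwise: don't forward, the block sits in treacle
        return
    # a lop-free block retroactively releases all held ancestors,
    # so consensus is never broken, only propagation is delayed
    for b in held:
        forward(b)
    held = []
    forward(block)
```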

knots isn't just about OP_RETURN.

and delaying blocks wouldn't work, because we have mining centralization.

i used to believe in that kind of solution as well. i was thinking about making a plugin for it in my own node implementation.

BUT mining centralization is the main problem atm.

it causes all kinds of issues. once we get rid of mining centralization, then you can think about those kinds of solutions.

delaying blocks would benefit pools like Foundry, because they can sometimes build on their own blocks faster than the rest of the network, because of the centralization.

mining centralization breaks everything.

many things rely on mining being decentralized.

nah, they just add little 20-second parts, probably to fill some quota.

you can just remove them or edit colors.

nothing changes, it doesn't add anything to the story.

i swear, when ai gets better, i'm gonna make it edit out the propaganda from movies and series. propaganda trying to turn people into multiple kinds of sexual products in the market for demons.

it's more about blob data than image formats. and they built an ecosystem.

the same way they decode the data, you can also detect it. if your ecosystem is unreliable and constantly being patched against, it will not grow. i think there is also a discussion on knots about using plugins for filters, so people can write scripts to filter stuff and don't have to know C to filter things.

i'm also working on my own node/client implementation, trying to solve the things i mentioned above. but that's outside this topic, and it's too early to talk about it.

OP_RETURN goes in the scriptPubKey, instead of sending it to a spendable script or address type like OP_1, OP_2 etc.
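a minimal sketch of the difference at the byte level, using raw script opcodes; nothing here is knots- or core-specific:

```python
OP_RETURN = 0x6a
OP_1 = 0x51  # taproot (witness v1) outputs start with this

def op_return_script(data: bytes) -> bytes:
    # provably unspendable output: OP_RETURN <data>
    assert len(data) <= 75  # keep it a single-byte push for simplicity
    return bytes([OP_RETURN, len(data)]) + data

def taproot_script(xonly_pubkey: bytes) -> bytes:
    # spendable output: OP_1 <32-byte x-only key>
    assert len(xonly_pubkey) == 32
    return bytes([OP_1, 32]) + xonly_pubkey
```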

anywhere you can put data can become data storage. for example, i can store data in my browser history just by using page titles. does it make sense? no. but i can do it. the best way to block data storage is to wait for it: wait for a method to show up, then bam, patch it. don't let it get big or mainstream; kill it on the spot. this is what bitcoin core had been doing until 2 years ago. after taproot nothing got patched, and now they are removing the older patches.

as i said, an input spending an output just gives a pointer to it, and if that output is not spent yet, you can spend it. depending on the implementation, spentness can also just be a boolean on the output.

if you are a miner and you wanna prune older blocks and just keep a list of UTXOs, you can also prune unspendable outputs, like OP_FALSE OP_IF scripts that store data. because you know it can't be spent, there's no point storing it. BUT somebody has to store it.
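a minimal sketch of that pruning idea, treating only OP_RETURN outputs as provably unspendable (the safe, standard case); the dict-based utxo set is assumed for illustration:

```python
OP_RETURN = 0x6a

def is_provably_unspendable(script_pubkey: bytes) -> bool:
    # any script starting with OP_RETURN can never be spent
    return len(script_pubkey) > 0 and script_pubkey[0] == OP_RETURN

def add_outputs(utxo_set: dict, txid: str, output_scripts: list) -> None:
    for vout, script in enumerate(output_scripts):
        if is_provably_unspendable(script):
            continue  # prune: no point tracking what can't be spent
        utxo_set[(txid, vout)] = script
```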

the whole point of a blockchain is proving something happened globally, with a timestamp and an order. it's not designed to store blob data, and it's not the best at storing blob data. normally you would store the blob data somewhere else, like blossom, then take the hash and put it on the blockchain. and now you can verify it. you timestamped it.

you wouldn't use the blockchain itself as blob storage in any other context, because it doesn't make sense. you would store it on a blob storage, and put the hash on the blockchain to prove it, log it.
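a minimal sketch of that commit-offchain pattern; the offchain upload is left as a comment, only the hashing and the OP_RETURN payload are concrete:

```python
import hashlib

def commit_blob(blob: bytes) -> bytes:
    digest = hashlib.sha256(blob).digest()  # 32 bytes
    # the blob itself lives offchain (e.g. on a blossom server);
    # onchain you only log: OP_RETURN <32-byte hash>
    return bytes([0x6a, 0x20]) + digest

def verify_blob(blob: bytes, committed_digest: bytes) -> bool:
    # anyone holding the blob can recompute the hash and check the log
    return hashlib.sha256(blob).digest() == committed_digest
```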

but now, since THEY CAN store it on the blockchain and forget about it, they are just doing that.

if any of the data is lost, you can't rebuild the blockchain; the block hashes change. somebody has to store the full blockchain.

the whole point of bitcoin is decentralization, and if you make running a node harder, then you make the network less decentralized.

that's what makes bitcoin special: it's the biggest, yet i can run it on my laptop.

in an ideal world most of the people would run their own node.

but nobody works on making running a node easier for normal people. i'm talking in terms of software, and the way the blockchain is stored. nobody is trying to think "how can we make this consume less storage so more people can run it".

instead they keep adding more stuff to it, making it little by little harder to run. if you synced a node recently, you might have noticed it takes longer to sync the blocks from the last two years. it's nuts.

---

i kinda vent here, but you can also watch mechanic's videos on youtube instead of listening to random people like me. before asking questions of random people who might not be well informed, i suggest you watch mechanic first.

https://youtube.com/@bitcoinmechanic/videos

maybe on new phones, after we store the blockchain in a better way and compress some parts.

stability is not more important than decentralization

coding is not a job. coding is coding. code doesn't need business.

btw even if i change the "word", it says the same thing

99% of people can't even imagine the worst kinds of torture being done every day to many humans

i was gonna star the repo, but they were using the main branch instead of master.

uhh ok my old pc running my bitcoin node ran out of space.

deleted some stuff, now it's back up.

only freed 5gb, idk how much longer it can go, maybe a few weeks.

so gonna free more space later somehow...

Replying to OceanSlim

Filters don't matter. Economics does.

https://mempool.happytavern.co/tx/58ae7a318f19c580b14d3547d6c65d1a417bbe8980189d34c61b2d4161741dfb

nostr:npub1zsyt45zfh2u28zuhvp66lljp8s6jrwlw7ckvfn34255e7jt37t9qx0xwsm but muh filters

This was not out of band. I sent it from my node directly to the network. I didn't "pay extra", I simply paid more than anyone else was paying at the time it was confirmed.

The dust tx with the OP_RETURN is also filtered by almost all nodes, yet it was still confirmed at 1 sat/vB...

I HAVE NO INTEREST IN PUTTING DATA ON BITCOIN. I MADE THIS TRANSACTION AS AN EXAMPLE ONLY. My reason for removing filters is not to be able to put more data on chain, because as demonstrated, I can already do that. I have no interest in inscriptions. I have no interest in all the motives detractors will try to apply to me. My motivation is to remove paternal aspects from Bitcoin's codebase and make it more maintainable. Filters exist to ensure users don't make unintended transactions, not to stop consensus-valid transactions from getting confirmed. In that sense, things like the dust filter are probably good. And again, if we can fix witness stuffing, I'm all for it.

thinking short term, some economic decisions might sound better while being bad long term for everyone. we already have many examples of this, not just in bitcoin; this is just one more example. soft forks are still forks: they change bitcoin, change the underlying field everybody plays on. and many of these things were made possible and "economically viable" by soft forks.

if you showed a computer case to someone from the middle ages, they would melt it down for the metal. sometimes people can't tell whether what they are doing is the best economic decision. and we have one bitcoin. we can't afford to experiment with soft forks, changing consensus. but we can afford to experiment with 100 different node implementations with different opinions. after all, nodes are software talking on behalf of bitcoin users. and like in any free-speech society, people can decide whether to share information or not, or when. filters matter because they work. after all, not all rules in a society are laws. not all rules in the bitcoin network are consensus.

thank you for replying.

i was talking about logical compression. for example, there is too much repeating data on the blockchain. many things point to each other by hash (eg. txid), but you don't have to store the whole hash: you can give things internal ids, which are shorter. and while indexing by hash, you don't need the full hash, just its shortest unique prefix. and since a search by hash supplies the full hash, you can re-validate the match. so you don't have to store the full hash and repeat it inside outputs. you can also give scriptPubKey(s) internal ids, so you don't repeat them when they repeat on the blockchain.
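a minimal sketch of that prefix-index idea; the in-memory dicts are stand-ins for a real on-disk store, and the 4-byte minimum prefix is an assumption:

```python
import hashlib

def txid_of(raw_tx: bytes) -> bytes:
    # txids are the double-sha256 of the raw tx, so they never need storing
    return hashlib.sha256(hashlib.sha256(raw_tx).digest()).digest()

class TxStore:
    def __init__(self):
        self.txs = []        # internal id -> raw tx bytes
        self.by_prefix = {}  # shortest unique txid prefix -> internal id

    def add(self, raw_tx: bytes) -> int:
        internal_id = len(self.txs)
        self.txs.append(raw_tx)
        txid = txid_of(raw_tx)
        # index only the shortest prefix that isn't taken yet
        for n in range(4, 33):
            if txid[:n] not in self.by_prefix:
                self.by_prefix[txid[:n]] = internal_id
                break
        return internal_id

    def find(self, txid: bytes) -> int | None:
        # the caller supplies the full txid, so matches can be re-validated
        # by re-hashing the stored tx; no full 32-byte txid is ever stored
        for n in range(4, 33):
            internal_id = self.by_prefix.get(txid[:n])
            if internal_id is not None and txid_of(self.txs[internal_id]) == txid:
                return internal_id
        return None
```

inputs would then reference outputs by (internal id, vout) instead of repeating the 32-byte txid everywhere.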

also, there are many compression methods designed to be fast, not to mention you can cache data for block ranges that are accessed frequently. i can compress only the older blocks, and since compression is done on older blocks only, i don't have to worry about handling blockchain forks while assigning internal ids. and there are many other compression techniques i can't think of atm.
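a minimal sketch of the "compress only old blocks" split, using zlib from the stdlib as a stand-in for a faster codec; the depth threshold is an assumption:

```python
import zlib

REORG_SAFE_DEPTH = 10_000  # blocks at least this deep are treated as final (assumed)

def store_block(height: int, raw: bytes, tip_height: int) -> bytes:
    if height <= tip_height - REORG_SAFE_DEPTH:
        # old blocks never change, so it's safe to compress them
        # (and to assign internal ids without worrying about forks)
        return zlib.compress(raw, level=6)
    return raw  # recent blocks stay raw: fast access, easy reorg handling
```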

if you don't add pruning and always have indexing enabled, it takes less space than making indexing optional and then enabling it, since in many cases you don't have to repeat the data in the blockchain and in the index separately.

also, i'm talking about a node designed for the end user, used by them only, so i can sacrifice some cpu time for disk space. i think disk space is the biggest issue when running a full node as a normal person. occupied disk space is permanent, and bothers people more.

idk, maybe i'm wrong. i will see when i try to run it with everything else i didn't mention here. worst case, i learn stuff. but i will try to push for a smaller blockchain as much as i can.

many people will be against this, but:

avoid wheat based food and sugar.

eat rice, meat, chicken, fish, eggs

you will feel better.