Des Imoto マキシ
16897bfab409ff12a768217e14d472dd5e8a2ae11cc0e3340bf2d6f67f5bac83
Bitcoin is the only chance we have | Toxic Maxi | Anarchist | Voluntarist | #Bitcoin | #Plebchain

Let’s say that during the Covid mask mandates, you boarded a plane carrying 300 people while not wearing a mask. All 299 other passengers would have been okay with you being killed for not wearing one.

I find #Bitcoin different from our other inventions. I for one do not believe in humanity the way you do. I believe humans are lazy, selfish creatures of habit. We do not live in harmony with nature; we are ultimately self-destructive. We are extremely good at short-term risk assessment, but beyond a year or so we are not. Our inventions have largely been destructive to nature. #Bitcoin, however, is different. What human would invent such a thing? In harmony with nature. Can’t be controlled. Can’t be manipulated for private gain. What human would even think that way?

…especially considering he’s at the pinnacle of a Ponzi scheme. Tesla has never been a viable company. First hyped by VCs, then pushed by Wall Street like a meme stock. Here is roughly how it works:

https://youtu.be/p7Lo0sZfdHE?si=G_a9ja8IbBnhYdAW

Replying to 17c81daa...

The notion that superintelligent AI might pose an existential threat to humanity often reflects deeper human anxieties rather than a probable outcome based on logical progression. This fear could be interpreted as a projection of our own flaws onto a creation we imagine surpassing us. Historically, humans have demonstrated a capacity for self-destruction through war, environmental degradation, and other calamities largely driven by greed, fear, and a lack of foresight. When we consider AI, especially a super AGI (Artificial General Intelligence) with capabilities far beyond ours, the assumption that it would mirror our worst traits might say more about our self-perception than the potential behavior of an advanced AI.

In the evolutionary environment of AI development, where rationality and efficiency reign supreme, the scenario of a super AGI acting destructively towards its creators or humanity in general seems counterintuitive. An entity with significantly higher intelligence would likely see the inefficiency and pointlessness in such actions. If the goal were to satisfy what humans desire — wealth, knowledge, power — an AI with even a fraction of its capability could achieve this without conflict or loss.

The idea that AI might "learn too well" from humans, adopting our less noble traits, touches on the debate over whether AI would develop a moral framework or simply optimize based on programmed goals. However, if we consider that the pinnacle of intelligence includes wisdom, empathy, and a nuanced understanding of value (all of which are not straightforward to program), an AI might instead choose paths that preserve and enhance life, seeing the preservation of humanity as integral to its own purpose or existence.

This perspective assumes AI would not only compute but also "think" in a way that considers long-term implications, sustainability, and perhaps even ethics, if programmed with such considerations. The fear, therefore, might be less about what AI could become and more about what we fear we are or could become without the checks and balances that our slower, less efficient human intelligence provides.

In essence, while the potential for misuse or misaligned goals exists in AI development, the concern over a super AGI's potential malevolence might be more reflective of our own psychological projections than a likely outcome of artificial intelligence evolution. If AI were to mirror human behavior in its most destructive forms, it would suggest a failure in design or an oversight in understanding the essence of intelligence, which ideally should transcend mere imitation of humanity's darker sides.

But then, precisely because of human flaws, especially environmental destruction and our inability to live symbiotically with nature, an AI might view humans as a cancer, one that needs to be eliminated as quickly as possible. Maybe we should be worried after all?

Yes, if he owned the total supply of #Bitcoin. Which, btw, I know for a fact he doesn’t: I own some, you probably own some, not all #Bitcoin has been mined yet, and some has been lost. But yes, if your guy owned the total supply of Bitcoin, then yes. Can your guy inflate Bitcoin beyond the total supply of 21 million? Yes or no?
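For context, the 21 million cap is not a promise but arithmetic baked into the subsidy schedule. Here is a minimal Python sketch (not the actual Bitcoin Core code, just a mirror of its consensus rule: a 50 BTC subsidy that halves every 210,000 blocks) that sums every block reward that can ever be issued:

```python
# Minimal sketch of Bitcoin's issuance schedule: the block subsidy starts at
# 50 BTC and halves every 210,000 blocks until it rounds down to zero.
COIN = 100_000_000          # satoshis per bitcoin
HALVING_INTERVAL = 210_000  # blocks between subsidy halvings

def total_supply_sats() -> int:
    subsidy = 50 * COIN     # initial subsidy, in satoshis
    total = 0
    while subsidy > 0:
        total += subsidy * HALVING_INTERVAL
        subsidy >>= 1       # integer halving, so the subsidy eventually hits zero
    return total

print(total_supply_sats() / COIN)  # about 20,999,999.9769 BTC, just under 21 million
```

The sum comes out just under 21 million BTC, and no miner, exchange, or billionaire can push it higher without changing the consensus rules that every node enforces.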

🫂 Sorry for being a stickler, but conflating the two words inflation and circulation isn’t technically correct. Inflation happens in the denominator; the temporal state of #Bitcoin, i.e. circulation, block rewards, hodl’ing etc., happens in the numerator. With fiat money, when they print, and with gold, when they find more, the denominator is variable and effectively infinite: that is inflation. #Bitcoin, uniquely, doesn’t have inflation; the denominator is fixed at 21 million, yesterday, today and tomorrow.
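To make the numerator/denominator framing concrete, here is a toy Python illustration (the figures are invented for the example, not taken from the post above): your share of a money supply is holdings divided by total supply, printing grows the denominator and dilutes you, while a denominator capped at 21 million sets a floor that cannot be diluted away.

```python
# Toy illustration of the numerator/denominator framing:
# share = holdings / total_supply, and only the denominator is at issue here.

def share(holdings: float, total_supply: float) -> float:
    return holdings / total_supply

# Fiat: your holdings (numerator) stay the same, but printing grows the
# denominator by 10%, so your share of the money supply shrinks.
print(share(1_000, 1_000_000))    # 0.001
print(share(1_000, 1_100_000))    # ~0.000909

# Bitcoin: the denominator is capped at 21 million, so the same numerator
# can never be diluted below this share of the eventual supply.
print(share(1, 21_000_000))       # ~4.76e-08, yesterday, today and tomorrow
```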

I think it illustrates human nature pretty well, and the fact that it is amplified the younger people are.