Terrible news, according to the numerous people who last week claimed high fees were a good thing and inevitable for Bitcoin.
No doubt the same people now believe low fees are a good thing.
This is why intellectual honesty is important.
You don't have to pretend *everything* is good for bitcoin all of the time.
Thanks. I have a few improvements in mind; I can make it a lot more robust and safe with more redundancy and QA/security checks, but it all depends on the cost of API calls.
Need to test it for a while.
OK, had a fun day.
Made a service that listens for error logs; when it picks one up, it scans my webserver directories and reads the logs, submits them to OpenAI for troubleshooting suggestions at varying temperatures, then assesses the responses for QA, and if a fix passes QA, implements it.
My webserver is now self-healing… maybe?
Moving away from it?
It's retail investors and their 401Ks that have been moved away from oil. That's all. 
The world has never consumed more oil than right now. The Compound Annual Growth Rate (CAGR) is still 2.5%; it hasn't even plateaued.
The world has never consumed more coal than right now. CAGR is 1%.
The world has never consumed more firewood than right now! CAGR 8%.
The only energy source we ever stopped using was whales, and that's because we killed them all.
But look at the earnings of those supermajors; the free cash flow from oil is simply enormous.
It dwarfs tech and banking.
MSM are out there predicting the end of oil.
Meanwhile…
Companies worldwide ranked by earnings (trailing 12 months):
1. 🇸🇦 Saudi Aramco: $307b ⛽️
2. 🇺🇸 Apple: $112b
3. 🇺🇸 Exxon Mobil: $86b ⛽️
4. 🇺🇸 Microsoft: $85b
5. 🇳🇴 Equinor: $78b ⛽️
6. 🇺🇸 Alphabet: $71b
7. 🇬🇧 Shell: $68b ⛽️
8. 🇨🇳 ICBC: $62b ⛽️
9. 🇧🇷 Petrobras: $57b ⛽️
10. 🇨🇳 China Construction Bank: $56b
11. 🇺🇸 JPMorgan Chase: $52b
12. 🇺🇸 Chevron: $50b ⛽️
13. 🇬🇧 BP: $47b ⛽️
17. 🇷🇺 Gazprom: $44b ⛽️
18. 🇹🇷 Turkish Airlines: $43b
20. 🇰🇷 Samsung: $35b
22. 🇩🇪 BMW: $35b
25. 🇺🇸 Pfizer: $33b
30. 🇺🇸 Meta: $27b
31. 🇺🇸 UnitedHealth: $27b
32. 🇹🇼 TSMC: $25b
35. 🇺🇸 Johnson & Johnson: $24b
39. 🇺🇸 Home Depot: $24b
42. 🇩🇪 Mercedes-Benz: $21b
44. 🇯🇵 Toyota: $20b
46. 🇫🇷 LVMH: $19b
49. 🇺🇸 Walmart: $19b
51. 🇺🇸 Visa: $18b
Biased outside the domain of US politics / current affairs?
What does “Nostr Purple user” mean?
They're rolling it out this week.
nostr:npub180cvv07tjdrrgpa0j7j7tmnyl2yr6yr7l8j4s3evf6u64th6gkwsyjh6w6 is it possible to associate every single private key on the nostr protocol with a human fingerprint? That way, even if one loses their private key, a new private key can be generated for the same fingerprint. It also solves the bot problem, in the sense that fingerprint-verified accounts can have special privileges on nostr clients.
c/c @jack
So everyone leaves their security credentials on every irl door handle?
You can tune OpenAI to your own preferences already.
Q: Why does the social network with the least moderation or censorship have the most wholesome and relaxed community?
Q: Why is engagement on nostr so high without an algorithm to boost engagement?
Q: Why is the only social network without a company also the only social network with money?
I've spent some time asking it to write code to produce complex 3D engineering geometry via CAD APIs, and it was a bit basic and laborious before. But now it can actually do chat-to-CAD without much work.
The May 12th version of GPT-4 is vastly superior at writing code compared to the previous version (March 14th).
Virus design tools.
Toxin modellers.
Etc, etc.
I asked my “team,” which is built using OpenAI's API, whether monolithic models or decentralised models would benefit from being open source in the mission of achieving AGI.
*****
The question of whether artificial general intelligence (AGI) is more likely to be achieved through monolithic models or a fabric of parallel specialized large language models (LLMs) is a profound one. In parallel, the question of whether such models should be open source adds an extra layer of complexity to this discussion. Both these questions have significant implications for transparency, democratization, and security in the AI field.
A monolithic AI model is a single, comprehensive model trained to understand and respond to a wide variety of tasks. In contrast, a fabric of specialized LLMs would involve multiple models, each trained on a specific task or domain. There are valid arguments for both approaches. A monolithic model could potentially lead to a unified, coherent form of AGI, whereas a system of specialized models could allow for more flexibility and adaptability. However, both approaches present substantial challenges in terms of complexity, data requirements, and computational resources.
With the approach of a fabric of parallel specialized LLMs, one could argue that the decentralization and diversification of learning tasks might foster robustness and adaptability. These models could be trained and optimized for specific domains, yielding high performance in their respective areas. The integration of these models could potentially result in a form of AGI that is highly specialized yet collectively adaptable. This approach also aligns with the idea of a more distributed, democratized AI, where different entities could contribute specialized models to the overall fabric of AGI.
On the open-source question, open-sourcing such a fabric of models could enable a broader pool of researchers and developers to contribute to the development of AGI. This could democratize the process, prevent undue concentration of power, and accelerate innovation by allowing a larger number of minds to contribute to the solution. Moreover, it could lead to better scrutiny and diversity of thought, which might help to mitigate biases and promote fairness.
However, there are critical considerations around security, misuse, and economic factors. The misuse of specialized AI models could have dire consequences, and open-sourcing could potentially make such misuse easier. Economic considerations are also important, as entities that invest substantial resources into developing these models may be reluctant to open-source them due to potential loss of competitive advantage.
Moreover, the coordination and integration of a fabric of specialized models is a non-trivial challenge, especially in an open-source context. Ensuring interoperability, managing complexity, and developing a coherent overarching framework would all be substantial challenges. Training such a diverse set of models would also require vast amounts of data and computational resources, posing further barriers to entry for smaller entities.
One potential compromise could be a hybrid approach that combines elements of open-source and proprietary models. This could involve open-sourcing certain aspects of the models (e.g., architecture, training methods) while keeping the trained models themselves proprietary. This could enable broader participation and scrutiny while protecting economic interests and mitigating misuse.
Regulation could also play an important role. Regulatory frameworks could be established to require certain levels of transparency and accountability from AI developers, without necessarily making the models completely open source. This could help ensure that the development of AGI is conducted in a manner that is ethical, fair, and in the best interests of society as a whole.
In conclusion, the path to AGI, whether through monolithic models or a fabric of specialized LLMs, is fraught with challenges and unknowns. The question of whether these models should be open source adds further complexity. While open sourcing could potentially democratize the process and promote transparency and innovation, it must be balanced against considerations around security, misuse, economic factors, and the technical challenges of coordinating a fabric of models. A hybrid approach, coupled with thoughtful regulation, might offer a way forward that captures the best of both worlds. As we continue to navigate these uncharted waters, it will be essential to keep these trade-offs in view.
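The "fabric of parallel specialized LLMs" the essay describes can be sketched as a router dispatching each query to a domain expert. This is a toy illustration only: the specialists below are stub functions standing in for separately trained models, and the keyword-overlap routing rule is an assumption, not a real integration framework.

```python
# Toy "fabric" of specialists: a router picks the domain expert whose
# keyword set best matches the query. Stubs stand in for real models.
SPECIALISTS = {
    "code":  lambda q: f"[code model] {q}",
    "legal": lambda q: f"[legal model] {q}",
}
KEYWORDS = {
    "code":  {"python", "bug", "compile"},
    "legal": {"contract", "liability", "clause"},
}


def route(query: str, default: str = "code") -> str:
    """Dispatch the query to the specialist with the largest keyword
    overlap; fall back to a default domain when nothing matches."""
    words = set(query.lower().split())
    best = max(KEYWORDS, key=lambda d: len(KEYWORDS[d] & words))
    domain = best if KEYWORDS[best] & words else default
    return SPECIALISTS[domain](query)
```

In a real fabric the routing step would itself likely be a learned model, and the essay's interoperability concern is exactly about standardizing the interface these specialists expose.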

