Former Youngest Person in the World!! stuartellison.com fantactico.com knostr.io zairk.com ๐Ÿชด

Terrible news, according to the numerous people who claimed last week that high fees were a good thing and inevitable for Bitcoin.

No doubt the same people now believe low fees are a good thing.

This is why intellectual honesty is important.

You donโ€™t have to pretend *everything* is good for bitcoin all of the time.

Thanks. I have a few improvements in mind; I can make it a lot more robust and safe, with more redundancy and QA/security checks, but it all depends on the cost of the API calls.

Need to test it for a while.

OK, had a fun day.

Made a service that listens for error logs. When it picks one up, it scans my webserver directories and reads the logs, then submits them to OpenAI for troubleshooting suggestions at varying temperatures, assesses the responses for QA, and, if a fix passes QA, implements it.

My webserver is now self-healing… maybe?
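Roughly, the loop looks like this. This is a stripped-down sketch: the real service calls the OpenAI API, but here a stubbed suggester stands in for that call, and all names are illustrative, not the actual implementation.

```python
import re

# lines matching these levels are treated as actionable errors
ERROR_RE = re.compile(r"\b(ERROR|CRITICAL)\b")

def extract_errors(log_text):
    """Return the log lines that look like error entries."""
    return [ln for ln in log_text.splitlines() if ERROR_RE.search(ln)]

def best_fix(error_line, suggest, temperatures=(0.2, 0.7, 1.0), qa=None):
    """Ask `suggest` (e.g. a wrapper around an LLM API) for a fix at
    several temperatures, and keep the first candidate that passes the
    `qa` check. Returns None if nothing passes, leaving it to a human."""
    qa = qa or (lambda fix: bool(fix.strip()))  # placeholder QA check
    for t in temperatures:
        candidate = suggest(error_line, t)
        if candidate and qa(candidate):
            return candidate
    return None

# demo with a stub in place of the real API call
def stub_suggest(error_line, temperature):
    if temperature < 0.5:
        return ""  # simulate an unusable low-temperature answer
    return f"restart worker handling: {error_line}"

errors = extract_errors("INFO ok\nERROR worker died\n")
fix = best_fix(errors[0], stub_suggest)
```

The QA gate before applying anything is the part doing the safety work; everything else is plumbing.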

The world has never consumed more oil than right now. The Compound Annual Growth Rate (CAGR) is still 2.5%; it hasn’t even plateaued.

The world has never consumed more coal than right now. CAGR is 1%.

The world has never consumed more firewood than right now! CAGR 8%.

The only energy source we ever stopped using was whale oil, and that’s because we killed them all.
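For reference, CAGR is just the annualized growth rate implied by a start value and an end value over some number of years. For example, consumption growing 28% over a decade works out to roughly 2.5% per year:

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by start/end values."""
    return (end / start) ** (1 / years) - 1

# consumption up 28% over 10 years is about 2.5% per year
rate = cagr(100, 128, 10)  # ~0.025
```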

But look at the earnings of those supermajors, the free cashflow from oil is simply enormous.

It dwarfs tech and banking.

๐Ÿšจ Sell all your oil stock folks. ๐Ÿšจ

MSM are out there predicting the end of oil.

Meanwhileโ€ฆ

Companies worldwide ranked by earnings (trailing 12 months):

1. ๐Ÿ‡ธ๐Ÿ‡ฆ Saudi Aramco: $307b โ›ฝ๏ธ

2. ๐Ÿ‡บ๐Ÿ‡ธ Apple: $112b

3. ๐Ÿ‡บ๐Ÿ‡ธ Exxon Mobil: $86bโ›ฝ๏ธ

4. ๐Ÿ‡บ๐Ÿ‡ธ Microsoft: $85b

5. ๐Ÿ‡ณ๐Ÿ‡ด Equinor: $78bโ›ฝ๏ธ

6. ๐Ÿ‡บ๐Ÿ‡ธ Alphabet: $71b

7. ๐Ÿ‡ฌ๐Ÿ‡ง Shell: $68bโ›ฝ๏ธ

8. ๐Ÿ‡จ๐Ÿ‡ณ ICBC: $62bโ›ฝ๏ธ

9. ๐Ÿ‡ง๐Ÿ‡ท Petrobras: $57bโ›ฝ๏ธ

10. ๐Ÿ‡จ๐Ÿ‡ณ China Construction Bank: $56b

11. ๐Ÿ‡บ๐Ÿ‡ธ JPMorgan Chase: $52b

12. ๐Ÿ‡บ๐Ÿ‡ธ Chevron: $50bโ›ฝ๏ธ

13. ๐Ÿ‡ฌ๐Ÿ‡ง BP: $47bโ›ฝ๏ธ

17. ๐Ÿ‡ท๐Ÿ‡บ Gazprom: $44bโ›ฝ๏ธ

18. ๐Ÿ‡น๐Ÿ‡ท Turkish Airlines: $43b

20. ๐Ÿ‡ฐ๐Ÿ‡ท Samsung: $35b

22. ๐Ÿ‡ฉ๐Ÿ‡ช BMW: $35b

25. ๐Ÿ‡บ๐Ÿ‡ธ Pfizer: $33b

30. ๐Ÿ‡บ๐Ÿ‡ธ Meta: $27b

31. ๐Ÿ‡บ๐Ÿ‡ธ UnitedHealth: $27b

32. ๐Ÿ‡น๐Ÿ‡ผ TSMC: $25b

35. ๐Ÿ‡บ๐Ÿ‡ธ Johnson & Johnson: $24b

39. ๐Ÿ‡บ๐Ÿ‡ธ Home Depot: $24b

42. ๐Ÿ‡ฉ๐Ÿ‡ช Mercedes-Benz: $21b

44. ๐Ÿ‡ฏ๐Ÿ‡ต Toyota: $20b

46. ๐Ÿ‡ซ๐Ÿ‡ท LVMH: $19b

49. ๐Ÿ‡บ๐Ÿ‡ธ Walmart: $19b

51. ๐Ÿ‡บ๐Ÿ‡ธ Visa: $18b

What does โ€œNostr Purple userโ€ mean?

Replying to 9833f132...

nostr:npub180cvv07tjdrrgpa0j7j7tmnyl2yr6yr7l8j4s3evf6u64th6gkwsyjh6w6 is it possible to associate every single private key on the nostr protocol with human fingerprints? That way, even if one loses their private key, a new private key can be generated for the same fingerprint. It solves the bot problem as well, in the sense that fingerprint-verified accounts can have special privileges on nostr clients.

c/c @jack

So everyone leaves their security credentials on every irl door handle?

Q: Why does the social network with the least moderation or censorship have the most wholesome and relaxed community?

Q: Why is engagement on nostr so high without an algorithm to boost engagement?

Q: Why is the only social network without a company also the only social network with money?

Iโ€™m very spent some time asking it to write code to produce complex 3D engineering geometry via CAD APIโ€™s and itโ€™s was a bit basic and laboursome before. But now it can actually do chat to CAD without much work.

The May 12th version of GPT-4 is vastly superior at writing code compared to the previous version (March 14th).

I asked my โ€œteamโ€ which is built using OpenAIโ€™s API, whether monolithic models or decentralised models would benefit from being open source in mission of achieving AGI.

*****

The question of whether artificial general intelligence (AGI) is more likely to be achieved through monolithic models or a fabric of parallel specialized large language models (LLMs) is a profound one. In parallel, the question of whether such models should be open source adds an extra layer of complexity to this discussion. Both these questions have significant implications for transparency, democratization, and security in the AI field.

A monolithic AI model is a single, comprehensive model trained to understand and respond to a wide variety of tasks. In contrast, a fabric of specialized LLMs would involve multiple models, each trained on a specific task or domain. There are valid arguments for both approaches. A monolithic model could potentially lead to a unified, coherent form of AGI, whereas a system of specialized models could allow for more flexibility and adaptability. However, both approaches present substantial challenges in terms of complexity, data requirements, and computational resources.

With the approach of a fabric of parallel specialized LLMs, one could argue that the decentralization and diversification of learning tasks might foster robustness and adaptability. These models could be trained and optimized for specific domains, yielding high performance in their respective areas. The integration of these models could potentially result in a form of AGI that is highly specialized yet collectively adaptable. This approach also aligns with the idea of a more distributed, democratized AI, where different entities could contribute specialized models to the overall fabric of AGI.

On the open-source question, open-sourcing such a fabric of models could enable a broader pool of researchers and developers to contribute to the development of AGI. This could democratize the process, prevent undue concentration of power, and accelerate innovation by allowing a larger number of minds to contribute to the solution. Moreover, it could lead to better scrutiny and diversity of thought, which might help to mitigate biases and promote fairness.

However, there are critical considerations around security, misuse, and economic factors. The misuse of specialized AI models could have dire consequences, and open-sourcing could potentially make such misuse easier. Economic considerations are also important, as entities that invest substantial resources into developing these models may be reluctant to open-source them due to potential loss of competitive advantage.

Moreover, the coordination and integration of a fabric of specialized models is a non-trivial challenge, especially in an open-source context. Ensuring interoperability, managing complexity, and developing a coherent overarching framework would all be substantial challenges. Moreover, training such a diverse set of models would require vast amounts of data and computational resources, posing further barriers to entry for smaller entities.

One potential compromise could be a hybrid approach that combines elements of open-source and proprietary models. This could involve open-sourcing certain aspects of the models (e.g., architecture, training methods) while keeping the trained models themselves proprietary. This could enable broader participation and scrutiny while protecting economic interests and mitigating misuse.

Regulation could also play an important role. Regulatory frameworks could be established to require certain levels of transparency and accountability from AI developers, without necessarily making the models completely open source. This could help ensure that the development of AGI is conducted in a manner that is ethical, fair, and in the best interests of society as a whole.

In conclusion, the path to AGI, whether through monolithic models or a fabric of specialized LLMs, is fraught with challenges and unknowns. The question of whether these models should be open source adds further complexity. While open sourcing could potentially democratize the process and promote transparency and innovation, it must be balanced against considerations around security, misuse, economic factors, and the technical challenges of coordinating a fabric of models. A hybrid approach, coupled with thoughtful regulation, might offer a way forward that captures the best of both worlds. As we continue to navigate these uncharted waters, it will be important to keep all of these options open.
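The coordination problem the essay raises for a fabric of specialists can be caricatured in a few lines: route each query to whichever domain specialists match it, then collect their answers. Everything below is a toy stand-in (plain functions instead of real LLM calls), just to make the routing idea concrete:

```python
# toy "fabric of specialists": each domain gets its own handler
# (in a real system these would be separate specialized models)
specialists = {
    "math": lambda q: "math-view: " + q,
    "law":  lambda q: "law-view: " + q,
}

def route(query, keywords):
    """Pick every specialist whose domain keyword appears in the query;
    fall back to asking everyone when nothing matches."""
    hits = [d for d, kw in keywords.items() if kw in query.lower()]
    return hits or list(specialists)

def answer(query):
    domains = route(query, {"math": "prove", "law": "contract"})
    return [specialists[d](query) for d in domains]
```

Even in this caricature, the hard parts the essay points at are visible: who maintains the routing table, and how the merged answers stay coherent.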

2023 is such an amazing time.

I asked my โ€œteamโ€ which is built using OpenAIโ€™s API, whether monolithic models or decentralised models would benefit from being open source in mission of achieving AGI.

*****

The question of whether artificial general intelligence (AGI) is more likely to be achieved through monolithic models or a fabric of parallel specialized large language models (LLMs) is a profound one. In parallel, the question of whether such models should be open source adds an extra layer of complexity to this discussion. Both these questions have significant implications for transparency, democratization, and security in the AI field.

A monolithic AI model is a single, comprehensive model trained to understand and respond to a wide variety of tasks. In contrast, a fabric of specialized LLMs would involve multiple models, each trained on a specific task or domain. There are valid arguments for both approaches. A monolithic model could potentially lead to a unified, coherent form of AGI, whereas a system of specialized models could allow for more flexibility and adaptability. However, both approaches present substantial challenges in terms of complexity, data requirements, and computational resources.

With the approach of a fabric of parallel specialized LLMs, one could argue that the decentralization and diversification of learning tasks might foster robustness and adaptability. These models could be trained and optimized for specific domains, yielding high performance in their respective areas. The integration of these models could potentially result in a form of AGI that is highly specialized yet collectively adaptable. This approach also aligns with the idea of a more distributed, democratized AI, where different entities could contribute specialized models to the overall fabric of AGI.

On the open-source question, open-sourcing such a fabric of models could enable a broader pool of researchers and developers to contribute to the development of AGI. This could democratize the process, prevent undue concentration of power, and accelerate innovation by allowing a larger number of minds to contribute to the solution. Moreover, it could lead to better scrutiny and diversity of thought, which might help to mitigate biases and promote fairness.

However, there are critical considerations around security, misuse, and economic factors. The misuse of specialized AI models could have dire consequences, and open-sourcing could potentially make such misuse easier. Economic considerations are also important, as entities that invest substantial resources into developing these models may be reluctant to open-source them due to potential loss of competitive advantage.

Moreover, the coordination and integration of a fabric of specialized models is a non-trivial challenge, especially in an open-source context. Ensuring interoperability, managing complexity, and developing a coherent overarching framework would all be substantial challenges. Moreover, training such a diverse set of models would require vast amounts of data and computational resources, posing further barriers to entry for smaller entities.

One potential compromise could be a hybrid approach that combines elements of open-source and proprietary models. This could involve open-sourcing certain aspects of the models (e.g., architecture, training methods) while keeping the trained models themselves proprietary. This could enable broader participation and scrutiny while protecting economic interests and mitigating misuse.

Regulation could also play an important role. Regulatory frameworks could be established to require certain levels of transparency and accountability from AI developers, without necessarily making the models completely open source. This could help ensure that the development of AGI is conducted in a manner that is ethical, fair, and in the best interests of society as a whole.

In conclusion, the path to AGI, whether through monolithic models or a fabric of specialized LLMs, is fraught with challenges and unknowns. The question of whether these models should be open source adds further complexity. While open sourcing could potentially democratize the process and promote transparency and innovation, it must be balanced against considerations around security, misuse, economic factors, and the technical challenges of coordinating a fabric of models. A hybrid approach, coupled with thoughtful regulation, might offer a way forward that captures the best of both worlds. As we continue to navigate these uncharted waters, it