How to run a billion-dollar scam?

Why will attempts to build superintelligence fail because of energy constraints and thermodynamics?

Attempts to build superintelligence face significant challenges from both energy constraints and thermodynamic limitations. Current semiconductor technology is highly inefficient compared to biological brains, which poses a substantial barrier to developing artificial superintelligence (ASI). An ASI would need to surpass the collective intelligence of large human populations, implying energy requirements that far exceed what even industrialized nations can supply. The "ERASI equation" has been proposed to estimate these energy needs, highlighting the improbability of ASI emerging under current architectures.
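The scale mismatch can be made concrete with a back-of-envelope calculation. The numbers below are illustrative assumptions for the sake of the argument, not figures from the ERASI equation itself: a human brain runs on roughly 20 W, and we assume an ASI must match the collective intelligence of ten million people while paying a large per-operation energy penalty for silicon relative to wetware.

```python
# Back-of-envelope estimate of ASI power demand under stated assumptions.
# All three constants below are illustrative guesses, not established values.

BRAIN_POWER_W = 20         # a human brain runs on roughly 20 watts
POPULATION = 10_000_000    # assume ASI must exceed ~10 million human minds
SILICON_PENALTY = 1e4      # assume silicon needs ~10,000x more energy
                           # per equivalent operation than biology

asi_power_w = BRAIN_POWER_W * POPULATION * SILICON_PENALTY
asi_power_gw = asi_power_w / 1e9

# A large industrialized grid (roughly US-scale) averages on the order
# of 500 GW of generation.
GRID_GW = 500

print(f"Estimated ASI power draw: {asi_power_gw:,.0f} GW")
print(f"Multiple of a {GRID_GW} GW national grid: {asi_power_gw / GRID_GW:.0f}x")
```

Even with these deliberately round numbers, the sketch lands at thousands of gigawatts, i.e. several entire national grids, which is the gist of the energy-constraint argument.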

In contrast, recent successes with large language models (LLMs) like GPT-3 and GPT-4 demonstrate advancements in narrow AI domains, but these models operate within the confines of existing energy constraints and do not approach the complexity or energy demands of ASI.

Thermodynamic AI presents a potential solution by leveraging stochastic fluctuations as a computational resource, aiming to minimize energy use and maximize efficiency. This approach integrates hardware and software, potentially offering a new paradigm that could overcome some of the energy barriers faced by conventional computing systems. However, this remains a theoretical framework and has yet to be realized in practice.
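The "noise as a resource" idea can be illustrated with a minimal simulation. The sketch below (an assumption-laden toy, not an implementation of any actual thermodynamic hardware) uses overdamped Langevin dynamics: injected thermal noise, instead of being fought, does the computational work of sampling from a target distribution — here a standard Gaussian with energy landscape U(x) = x²/2.

```python
import math
import random

def langevin_sample(steps=50_000, dt=0.01, seed=0):
    """Sample from a standard Gaussian via Euler-Maruyama Langevin steps.

    The random kicks are the computational resource: the chain's
    stationary distribution is exp(-U(x)) with U(x) = x^2 / 2.
    """
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for _ in range(steps):
        grad_u = x  # gradient of U(x) = x^2 / 2
        # Drift down the energy gradient, plus noise scaled by sqrt(2*dt).
        x += -grad_u * dt + math.sqrt(2 * dt) * rng.gauss(0.0, 1.0)
        samples.append(x)
    return samples

samples = langevin_sample()
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(f"empirical mean ~ {mean:.2f}, variance ~ {var:.2f}")
```

The empirical mean and variance converge toward 0 and 1, the target Gaussian's statistics. The pitch of thermodynamic AI is that physical hardware could perform this kind of stochastic sampling natively, at far lower energy cost than simulating the noise digitally.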

#JustAnotherAmericanScam


Discussion

Why is investing billions in this pursuit (of superintelligence), while ignoring the humanitarian problems of the state, a dangerously destructive recursive pattern?

#AskGPT


Investing billions in AI development, especially in superintelligence, while seemingly neglecting pressing humanitarian issues, fits a recurring pattern in which technological advancement is prioritized over immediate social needs. This pattern is potentially destructive for several reasons:

1. **Resource Allocation**: Significant financial resources are directed towards AI development, with over $67 billion invested annually, compared to a mere $250 million for AI safety[2]. This disparity highlights a focus on technological growth rather than addressing existing societal issues such as poverty, healthcare, and education.

2. **Neglect of Immediate Harms**: While AI safety efforts aim to prevent existential risks, more immediate concerns like discrimination, bias, and fairness in AI systems often receive less attention[1]. This can exacerbate existing social inequalities and divert attention from solving current humanitarian problems.

3. **Regulatory Challenges**: The rapid pace of AI development outstrips regulatory measures, making it difficult to ensure that AI technologies are deployed safely and ethically[3]. This can lead to unintended consequences that may harm society.

4. **Commercial Incentives**: The commercial focus of many AI initiatives can lead to misaligned priorities, where profit motives overshadow ethical considerations and the broader public good[2].

Addressing these issues requires a balanced approach that integrates AI safety with broader social and ethical considerations.

nostr:nevent1qqsw7lzk0zpfccvplrl5tn6gnjsl3s8prx0dy4ghdf6st7xhqfjk8lgpz9mhxue69uhkummnw3ezuamfdejj7q3qzmg3gvpasgp3zkgceg62yg8fyhqz9sy3dqt45kkwt60nkctyp9rsxpqqqqqqz32nc8p