Fetching events takes too long, and I'm not sure how it pulls the content; it doesn't look like it's doing it on the client side (that tab has barely any CPU usage and only 100 MB+ of RAM). Feels like it could be optimized, e.g. asking the user on first use whether they want to back up.
I think this is already good enough. In most cases text is sufficient to express the content that gets censored on other platforms. If there really is a video, putting it on BitTorrent is fine. Storing multimedia on relays is too heavy a load for a federated system like this; borrowing other centralized solutions works well.
Yeah, it would be great if pegged assets could be done without a 3rd-party bridge. One way I think cross-chain assets could be done right is to mint on one chain and burn on the other.
BTW, I am afraid of bridges.
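A toy sketch of what mint-on-one-chain / burn-on-the-other means in terms of accounting (purely illustrative, not any real bridge's design; the "burn receipt" stands in for whatever proof the destination chain would actually verify, e.g. a light-client or ZK proof):
```python
from dataclasses import dataclass, field

@dataclass
class ChainLedger:
    name: str
    balances: dict = field(default_factory=dict)

    def burn(self, account: str, amount: int) -> str:
        # Destroy tokens on this chain and hand back a receipt for the other chain.
        assert self.balances.get(account, 0) >= amount, "insufficient balance"
        self.balances[account] -= amount
        return f"burned:{self.name}:{account}:{amount}"

    def mint(self, account: str, amount: int, burn_receipt: str) -> None:
        # Mint only against a receipt of an equal burn elsewhere (unchecked here;
        # a real system would verify the proof instead of trusting the string).
        assert burn_receipt, "must present proof of a burn on the other chain"
        self.balances[account] = self.balances.get(account, 0) + amount

# Move 5 units of a pegged asset from chain A to chain B.
a = ChainLedger("A", {"alice": 10})
b = ChainLedger("B")
receipt = a.burn("alice", 5)
b.mint("alice", 5, burn_receipt=receipt)
assert sum(a.balances.values()) + sum(b.balances.values()) == 10  # total supply conserved
```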
#[4]
Even if the compromised machines (botnet nodes), taken together, have attack capability, that doesn't mean the hackers will collude.
In general, hackers' skill levels roughly follow a normal distribution, and compromised machines are the resources they fight over among themselves. Without collusion, for the botnets to become an attack threat, some group of hackers would have to be capable of controlling the vast majority of compromised machines. That's even harder; hackers still have national borders, and many cross-border actions simply won't happen.
My ranking of likelihoods: a good portion of the compromised machines are actually public/institutional resources >>> some hacker/organization is capable of occupying the vast majority of compromised machines >> attackers have a profitable motive to collude (the risk/reward of a collusive fork is attractive enough to tempt them) >>> attackers actually collude > the colluded hash power is enough to fork off 20% of the transaction volume and sustain it for an hour > after someone in the community points out the attack behavior on the attacking chain, 20% of the transaction volume still remains on that chain.
https://open.spotify.com/episode/3cemC6cxTLqgFtuVdRHYgf?si=PBU7jdmCR9i_AIVR49icrw&nd=1
A very good intro podcast to ZK by the developer of StarkNet.
ZK, zero-knowledge proof, is a very useful tool for many applications. The most commonly brought-up topic about ZK is ZK rollups on ETH. Fundamentally, blockchains are really not good at general storage or computation, which is why programs on a blockchain are called smart contracts. In the long term, I still don't see that systems built on blockchains could or should handle much general computation. What ZK rollups bring to the system is only cheap computation with minimal accessible storage (so don't expect any server software to run on it).
ZK is basically a proof that you know something without actually revealing the information. The basic ZK validity-proof rollup process down to L1 is:
1. The sequencer(s) aggregate all transactions into a block.
2. The prover computes the block under a special paradigm and generates the results and the validity proof of that computation.
3. The prover submits the state difference/result and the proof to L1, where a verifier checks them (verification takes very little computational resource).
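A toy skeleton of that flow (purely illustrative: the "proof" below is just a hash commitment standing in for a real SNARK/STARK so the control flow runs end to end, and every function name is made up, not any real prover/verifier API):
```python
import hashlib, json

def state_root(balances: dict) -> str:
    # Commitment to the full state (a real rollup would use a Merkle/Verkle root).
    return hashlib.sha256(json.dumps(balances, sort_keys=True).encode()).hexdigest()

def sequence(mempool: list) -> list:
    # Step 1: the sequencer aggregates pending transactions into a batch.
    return list(mempool)

def prove(batch: list, state: dict):
    # Step 2: the prover executes the batch off-chain and produces the new state
    # plus a "validity proof" (here just a hash over pre-root, batch, post-root).
    pre_root = state_root(state)
    new_state = dict(state)
    for sender, receiver, amount in batch:
        assert new_state.get(sender, 0) >= amount, "invalid tx"
        new_state[sender] -= amount
        new_state[receiver] = new_state.get(receiver, 0) + amount
    post_root = state_root(new_state)
    proof = hashlib.sha256((pre_root + json.dumps(batch) + post_root).encode()).hexdigest()
    return new_state, post_root, proof

def l1_verify(pre_root: str, batch: list, post_root: str, proof: str) -> bool:
    # Step 3: the L1 verifier does a cheap check and never re-executes the batch.
    return proof == hashlib.sha256((pre_root + json.dumps(batch) + post_root).encode()).hexdigest()

state = {"alice": 10, "bob": 0}
batch = sequence([("alice", "bob", 3)])
new_state, post_root, proof = prove(batch, state)
assert l1_verify(state_root(state), batch, post_root, proof)  # accepted on L1
```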
This process separates the computation from the blockchain. That's crucial because, on L1, computation is run on every node, which doesn't scale at all. Now nodes only verify the results, and proofs can also be bundled to reduce the computational requirements further. Compared with optimistic rollups, which assume all processed transactions are valid unless challenged through a fault-proof process, ZK rollups don't have to wait days for the rollup to be confirmed on L1.
A ZK proof is a proof of the execution of transactions, so it is blockchain agnostic, meaning one chain can verify another chain. One major application is BTC <-> ETH atomic swaps: without ZK, you'd need some oracle (a program outside the chain) to know whether the BTC address you want to swap with actually followed the swap process. With a ZK proof of the BTC chain's state change, you don't have to deal with all the problems that come with an oracle. Another application is storage-chain verification.
The content of the podcast is very detailed. Some interesting topics include why computation is cheap but storage is not; how they went from ASIC to CPU; how the states are stored; and the domain-specific language for programming ZK.
#zk #zeroknowledge #zkproof #ETH #Blockchain #scale #scaling #AtomicSwap #BTC #rollup
To summarize my ranking of likelihoods: a good portion of the compromised machines are actually public/institutional resources >>> attackers have a profitable motive to collude (the risk/reward of a collusive fork is attractive enough to tempt them) >>> attackers actually collude > the colluded hash power is enough to fork off 20% of the transaction volume and sustain it for an hour > after someone in the community points out the attack behavior on the attacking chain, 20% of the transaction volume still remains on that chain.
The last two points are properties of any PoW chain, which is why I've always felt a sufficiently decentralized PoW chain is very secure (including miners who mine purely for their own interest).
Besides, the resources on compromised machines are really lousy, just scraps; take up any more and the servers' original users will notice.
A normal cloud server provider can't have its machines compromised at scale. That's also why I think these "hackers" are actually people stealing public/institutional resources, without much real technical skill.
"If a hacker wants to mine Monero, they usually have a large amount of machine resources (a hacker with only a few compromised machines?), and controlling that many machines costs them nothing" — my point is exactly that this premise doesn't hold. If hackers don't collude, compromised machines are a resource they fight over with each other.
But I suspect the bigger problem is that "most compromised machines are controlled by hackers (not the same one / same group)" is itself a big myth. I think it's more likely that "no less than 30% of the compromised machines are mined on by insiders of some government/organization acting on their own initiative." In that case: 1. the machines are even more dispersed; 2. collusion is even less likely, since these people would never reveal their own illegal behavior.
https://openai.com/blog/planning-for-agi-and-beyond/
OpenAI's roadmap toward AGI. Nothing new or interesting: they will limit the general public's access to the model/system (I am not judging whether that's good or bad), and the open-sourcing process will be slow.
Anyone who wants an open-source LLM should look elsewhere. Meanwhile, Meta's new LLaMA seems to be more open. (It seems the model structure and training tools are open, but the actual LLaMA product/model weights are not.)
What's impressive is that they claim the 13-billion-parameter LLaMA outperforms the 175-billion-parameter GPT-3 on many tasks. (Although it could ) And from the comparison sheet, the 7-billion LLaMA is not far behind. Hope to see more updates once developers get to test them. But it certainly gives a lot of hope for democratizing LLMs.
https://mobile.twitter.com/ylecun/status/1629189925089296386
#AI #OpenAI #AGI #ModelTraining #Meta
#[2]
Indeed, a lot of points I hadn't thought of.
"Wormhole token bridge loses $321M"; "Jump Crypto Just Counter-Exploited the Wormhole Hacker for $140 Million"
Why do these hackers always leave their funds un-laundered? Can't they just do an atomic swap to XMR or use Tornado Cash? But in the end, it's still a huge loss for them. What I learnt:
1. The dev teams of centralized protocols/chains have complete control over the protocol, and teams across chains can cooperate to change the protocol without notifying the public; even professional hackers can't see this coming.
Whether or not the dev teams should do it is up for debate, but it certainly raises concerns about the teams' integrity.
2. But if the contract is unmodified, or you act fast, there's nothing the dev team can do. For average users, there should be some easy auditing tool that tells them whether the contract/protocol has been changed (see the sketch after this list).
3. Cross-chain is really, really dangerous; there are so many hacking events. That's why I am always for one giant ecosystem surrounding one chain. (Cosmos/ATOM still fits this definition.)
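As a rough illustration of point 2, a minimal sketch of such a check with web3.py (the RPC URL, contract address, and pinned hash are placeholders; note that proxy/upgradable contracts can change behavior without changing bytecode, so this alone is not a full audit):
```python
# "Has this contract's deployed bytecode changed?" check using web3.py.
from web3 import Web3

RPC_URL = "https://example-rpc.invalid"                    # placeholder RPC endpoint
CONTRACT = "0x0000000000000000000000000000000000000000"    # placeholder contract address
PINNED_HASH = "0x..."                                      # keccak256 of the audited bytecode

w3 = Web3(Web3.HTTPProvider(RPC_URL))
bytecode = w3.eth.get_code(Web3.to_checksum_address(CONTRACT))  # current on-chain bytecode
current_hash = Web3.keccak(bytecode).hex()

if current_hash != PINNED_HASH:
    print("WARNING: on-chain bytecode differs from the audited version")
else:
    print("Bytecode matches the pinned hash")
```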
https://blockworks.co/news/jump-crypto-wormhole-hack-recovery
https://cointelegraph.com/news/wormhole-token-bridge-loses-321m-in-largest-hack-so-far-in-2022
#ETH #Bridge #Protocol #CrossChain #Hack #CounterHack #trustless #integrity
Things like this make my blood boil, so let me rant: if you see such blatant malicious behavior and, without a special reason, still look the other way, you're just as damnable. A garbage society is made up of these little bits of evil; if you don't want life to turn into a cesspool, start with yourself.
Just saw someone share this on Twitter: download the Bitcoin Beach Wallet and you can see businesses that accept BTC on a map (no registration needed). More than I expected, and quite a few are outside El Salvador.
#BTC #LightningNetwork #Adaption
https://www.hpc-ai.tech/blog/colossal-ai-chatgpt
There's still hope for open source large models. The giants probably still have huge influence on these open source projects. But it's better than nothing.
On the other hand, these models are humongous, and the power to run inference on them is too centralized. Neuromorphic and analog chips plus better quantization (model shrinking) seem to be the only way to democratize them.
#AI #ArtificialIntelligence #Future #Prediction #Hardware #Neuromorphic #Analog
Some references on the minimum hardware requirements:
1. GPT-NeoX 20B, with 20 billion fp16 (2-byte) parameters, requires 42 GB of VRAM for near-real-time inference. Simple math: 20 billion * 2 bytes = 40 GB. The memory requirement is too high for a single non-high-end GPU, so the model needs to be split, which introduces other parallelism problems.
2. Qualcomm deployed a 1-billion-parameter int8 (1-byte) model on a Snapdragon 8 Gen 2 platform, which needs about 1 GB of RAM (not sure what kind) to generate a 512*512-pixel image in around 15 s.
It runs on the NPU/APU to boost INT8 inference performance. Considering the low VRAM and low processor frequency, this is still quite impressive.
3. ChatGPT has 175 billion parameters, about 8 times GPT-NeoX. Even after quantization (if possible), it would still consume roughly 42 * 8 / 2 = 168 GB of fast memory.
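To make the back-of-the-envelope numbers above explicit, a quick helper (the function name is mine; parameter counts and byte widths come from the items above, and this counts raw weight memory only):
```python
def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    # Raw weight memory only; activations, KV cache and runtime overhead come on top
    # (roughly why 42 GB is observed for 40 GB of raw fp16 weights).
    return params_billion * bytes_per_param  # 1e9 params * bytes, divided by 1e9 bytes/GB

print(weight_memory_gb(20, 2))    # GPT-NeoX 20B, fp16      -> 40.0 GB
print(weight_memory_gb(1, 1))     # ~1B on-device model, int8 -> 1.0 GB
print(weight_memory_gb(175, 2))   # a 175B model, fp16      -> 350.0 GB
print(weight_memory_gb(175, 1))   # same model, int8        -> 175.0 GB (the post's 168 GB
                                  # instead scales the observed 42 GB by ~8x, then halves it)
```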
All of this is just the memory-requirement side; there is much more going on with computation and I/O bottlenecks. My conclusion: without proper innovation in AI inference hardware, current consumer hardware won't be able to handle LLMs in real time. But smaller 1-10-billion-parameter models on personal devices are still very promising, although a lot of work needs to be done on the NPU and memory-hierarchy architecture.
Reading it through, I don't feel he's arguing against federation; rather, he's saying such services are hard to popularize and make mainstream. I see no problem with that: federated/decentralized things will always have a minority user base (but a very necessary one). The most important part is the last paragraph:
An open source infrastructure for a centralized network now provides almost the same level of control as federated protocols, without giving up the ability to adapt. If a centralized provider with an open source infrastructure ever makes horrible changes, those that disagree have the software they need to run their own alternative instead. It may not be as beautiful as federation, but at this point it seems that it will have to do.
Open-source and decentralized technologies are not in absolute opposition to centralized companies/organizations; mature, in-demand open-source technology becomes the foundation of new centralized technologies/business models. The best example is Android with Google and a whole series of big companies. I think the same goes for crypto: the future giants will be the ones able to use technologies like BTC/ETH as the underlying modules of their operations.
