https://twitter.com/elonmusk/status/1630394434847227909
Starlink
"Starlink v2 mini deployed in orbit"
Not sure if this version will enable direct smartphone-to-satellite internet, but the overall upgrades are still very nice.
-better phased-array antennas and more spectrum, for about 4x more capacity per satellite
-new argon Hall thrusters for on-orbit maneuvering, with about 2.4x the thrust of the previous generation (170 mN)
Hope the full v2 experience arrives soon. Direct smartphone connection is a very important way of expanding suburban areas for better quality of life. Infrastructure is required to build a 30~40-minute-range suburb, and among all of it, internet is the hardest. Sustainable electricity and water are not that difficult for a community of over 200 houses (closer to a natural water source is preferable), so the only thing left is internet. For more populated areas, solutions like cell towers connected to satellites instead of fiber are viable. But suburban areas are mostly uncovered, so having no connection during commutes or field work is still a huge deal. Starlink v2 seems to be the key toward this future.
"The Starlink v2 Mini has been deployed in orbit"
Not sure whether this version supports direct smartphone-to-satellite internet, but the overall upgrades are still excellent.
-better phased-array antennas and more spectrum, for about 4x the capacity per satellite
-new argon Hall thrusters for orbital maneuvering (low orbits need periodic reboosts, otherwise the satellites decay and fall), with about 2.4x the thrust of the previous generation (170 mN)
Hoping the full v2 experience arrives soon. I think direct smartphone-to-satellite connection is an important factor in expanding the suburban living circle and improving quality of life. Building a 30~40-minute-range suburb requires all kinds of infrastructure, and the hardest part is internet. For a community of over 200 houses, sustainable electricity and water are not that hard to solve (being closer to a natural water source is preferable), so the only problem left is internet. For more populated areas, solutions like connecting cell towers to satellites instead of fiber are viable. But suburban areas are quite large; commuting or doing maintenance and field work in uncovered areas is impractical, and usually requires more specialized satellite communication equipment. Starlink v2 seems to be the key toward this future.
#spaceX #space #satellite #satelliteInternet #InternetCoverage #Connectivity #Smartphone #suburbanization
"ChatGPT for Robotics"
Is this the future of low-code/no-code? It seems to greatly reduce the barrier between users and robot control in the real world. The world map, item locations, and robot movements are pre-defined/pre-implemented high-level APIs, with movements across different robots unified as much as possible. So the AI handles the part that converts a human-language description of the desired robot actions into code that interacts with those APIs. This is a huge amount of work and communication for humans, so it will certainly make robotics more accessible to the average user.
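A rough sketch of what this could look like: a pre-implemented high-level API, plus the kind of glue code an AI might generate from a plain-language request. All class and function names here are hypothetical, made up for illustration, and not taken from the actual ChatGPT for Robotics work.

```python
# Hypothetical high-level robot API; the AI only has to emit calls against it.

class Robot:
    def __init__(self, world_map):
        self.world = world_map          # pre-defined map: item -> location
        self.position = (0, 0)
        self.log = []                   # record of executed actions

    def locate(self, item):
        """Look up an item in the pre-built world map."""
        return self.world[item]

    def move_to(self, location):
        self.position = location
        self.log.append(f"moved to {location}")

    def pick_up(self, item):
        self.log.append(f"picked up {item}")

# Code the AI might generate from: "bring me the soda"
robot = Robot({"soda": (3, 4), "user": (0, 0)})
robot.move_to(robot.locate("soda"))
robot.pick_up("soda")
robot.move_to(robot.locate("user"))
print(robot.log)
```

The point is that path planning, mapping, and motor control stay hidden behind `locate`/`move_to`/`pick_up`, so the language model only translates intent into a few API calls.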
I think this will make robotics thrive rather than robots replacing human workers. Kind of like how Uber/DoorDash made delivery and paid rides more accessible to drivers and customers. Maybe there will be a lot of human workers instructing and monitoring robots doing other jobs.
Is this the future of low-code/no-code? It seems to greatly lower the barrier between users and robot control in the real world. The world map, item locations, and robot movements/actions are all pre-defined, pre-implemented, higher-level abstract APIs, and the movement/action APIs across different robots are unified as much as possible. The AI generates the code that converts human language into interactions with those APIs. If humans had to do this, it would take a huge amount of work, communication, and overhead, so this kind of AI system will certainly make robot control much more accessible to average users.
I think the more likely outcome is that the robotics industry thrives, rather than robots replacing workers. Kind of like how Uber/DoorDash made delivery and paid rides more accessible to drivers and customers. In the future, many people's jobs may be instructing and monitoring robots doing other work.
#AI #robotics #future #job #career #SharingEconomy
Finally got to try the Bing AI chatbot; my overall experience was not impressive. Most of my searches can be done in 5~10 seconds, 20 seconds tops. If I'm using a voice assistant, the whole experience is even more seamless than typing. Bing still needs around 10 seconds to search/crawl info online, and another 10+ seconds for inference. (Or it's deliberate latency to offload the system, but the user experience is slow.)
With 20 seconds, if it's a general/popular topic, I can definitely find a proper introductory article and skim the summary. Some easy optimizations/debugging could be done: for example, when I ask it to translate a paragraph, it searches the whole thing online first. Or when I say "translate what I said earlier," it searches for "translate what I said earlier" and translates that sentence, or doesn't reply at all. It seems not even as smart (dumb?) as ChatGPT.
Maybe it's good for questions whose information is too scattered online, or whose wording/keywords are difficult for current search engines. For example, "python code to concatenate two strings and remove all spaces in between."
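For reference, the answer that example query is fishing for is tiny; the difficulty is purely that keyword search matches it poorly. A minimal version (interpreting "remove all spaces" as stripping every space character):

```python
def concat_no_spaces(a: str, b: str) -> str:
    """Concatenate two strings, then strip every space character."""
    return (a + b).replace(" ", "")

print(concat_no_spaces("hello world", " foo bar"))  # helloworldfoobar
```

That one-liner is exactly the kind of thing a chatbot can hand back directly, while a traditional search engine makes you dig through Q&A threads for it.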
Overall, I wouldn't expect anything useful to me until the chatbot system (not just the AI model) can figure out when it can answer my question without searching online first. And it doesn't need to crawl the whole world wide web when I just need to know the weather. Google and Amazon probably have tons of APIs for easy questions like this; an AI chatbot should be able to leverage those. Or implement the whole system the other way around: only let the AI chatbot handle the questions that current voice assistants can't.
Finally got to try the Bing AI chatbot; overall I was not very satisfied. Most of my searches can be done within 5~10 seconds, 20 seconds at most. If I use a voice assistant, the whole experience is even more seamless. Bing still needs about 10 seconds to search/crawl information online, plus another 10+ seconds to stream out results while running inference. (Or it's deliberate latency to offload the system, but the user experience is just slow.)
In 20 seconds, if it's a common/popular topic, I can definitely find a proper introductory article and have enough time to skim the summary. Quite a few easy optimizations/debug fixes could be made: for example, when I ask it to translate a paragraph, it searches the whole thing online first. Or when I say "translate what I said earlier," it searches for "translate what I said earlier" and translates that sentence, or doesn't answer at all. It seems not even as smart (dumb?) as ChatGPT.
Maybe it's useful for questions whose information is too scattered online, or whose wording/keywords are hard for current search engines. For example, "python code to concatenate two strings and remove all spaces in between."
Overall, unless the chatbot system (not just the AI model) can figure out when it can answer my question directly without searching, I don't expect it to be of much use to me. When answering short questions like weather or sports scores, it shouldn't need to scrape the whole internet to find the answer. Google's and Amazon's AI bots can answer simple questions that fast presumably because they already have the corresponding APIs; for an AI chatbot to become more useful, it has to leverage those. Or go the other direction: only call the LLM chatbot when the existing voice assistant can't answer directly.
#AI #OpenAI #ChatGpt #Bing #Microsoft #Chatbot #Assistant
https://dune.com/browse/dashboards
An awesome tool to view on-chain data.
#blockchain #data #analysis #transparency
https://twitter.com/BitcoinMagazine/status/1629834253503717376
10 sats for bananas? Is this a demo? Too cheap to be true.
#btc #LightningNetwork #adoption #payment
I think El Salvador is okay; it really depends on how hard they crack down on crime over these next few years.
“The programme which appears to be for residency initially rather than direct citizenship by investment looks likely to be set at $100,000 rather than the 3 Bitcoin that was initially rumoured. The new Volcano Bonds set to be offered next year will be a qualifying investment. Other investment types may include real estate and business investment. ”
https://www.goldenvisas.com/el-salvador
If buying Volcano Bonds or real estate qualifies, plenty of people in the Chinese-speaking world have that kind of spare money. How to move the money out is the real problem.
Plus the "DM us" and "this batch is full" stuff. The best case is probably some kind of group investment; in the worst case, anything could happen. It feels like a trap to me.
EigenLayer proposes a solution to the fractured trust on ETH's layer 2s/sidechains. The problem: in order to utilize the full trust of L1, L2s and sidechains need to deploy/prove (rollup) their apps on the EVM (Ethereum Virtual Machine). But that's generally not possible for components such as data availability layers, oracles, new VMs, bridges, and trusted execution environments. Building trust around those parts introduces new problems: it's hard to build new decentralized trust networks; even if one is built, its trust level will not match L1/ETH/BTC; and the new trust network will split value away from L1.
The idea of EigenLayer is to introduce a new layer between ETH and those problem modules. EigenLayer turns each of these modules into a container. ETH owners can stake their ETH on one of these containers to provide it with security and trust. In return, the container pays fees back to those stakers. Basic structure as below:
ETH owner -- ETH mainnet -----(restake to one of the containers)---- EigenLayer ---(fees back to stakers)--- Container (modules that need trust/security)
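A toy model of that restake-and-fee loop, just to make the flow concrete. This is purely illustrative; the class and method names are my own, not EigenLayer's actual contract interface.

```python
class Container:
    """A module (oracle, DA layer, bridge, ...) that rents security from restakers."""

    def __init__(self, name):
        self.name = name
        self.restaked = {}      # staker -> amount of restaked ETH

    def restake(self, staker, amount):
        """An ETH owner points (part of) their stake at this container."""
        self.restaked[staker] = self.restaked.get(staker, 0) + amount

    def total_security(self):
        """Total restaked ETH backing this module's trust."""
        return sum(self.restaked.values())

    def distribute_fees(self, fees):
        """Pay fees back to restakers pro rata to their stake."""
        total = self.total_security()
        return {s: fees * amt / total for s, amt in self.restaked.items()}

oracle = Container("price-oracle")
oracle.restake("alice", 32)
oracle.restake("bob", 64)
print(oracle.total_security())        # 96 ETH securing the module
print(oracle.distribute_fees(3.0))    # {'alice': 1.0, 'bob': 2.0}
```

The real system adds slashing, withdrawal delays, and operator roles on top, but the economic loop (stake in, fees out, proportional to stake) is the core of the diagram above.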
Viewed from another angle, it's like an open investment market for stakers to put their stakes in. I think this is very important: besides oracles, data availability is also a huge problem for a blockchain system that doesn't involve another blockchain. And I have a lot of doubts about storage solutions like Arweave or Filecoin.
Although there are solutions like using a zk proof of another chain's state to build trust toward that chain without any third party involved, you still have to rely on that chain's own security/trust. Getting everything done under the security of one chain is necessary for scaling up. (Targeting the problems EigenLayer proposes to solve.) This also fits how I envision the decentralized future: only 1~3 main chains will dominate over 99% of the networks and traffic.
Honestly, this sounds to me like a proof-of-stake, more-modules version of co-mining: sharing some of the main chain's security resources with the sidechain. It just makes sense.
EigenLayer proposes a solution to the fragmented trust/security on ETH's L2s/sidechains. The existing problem: to fully leverage L1's trust/security, L2s and sidechains need to deploy/prove (rollup) their applications on the Ethereum Virtual Machine (EVM). But that's generally not possible for things like data availability (verifying that data really is safely stored for the L2), oracles, new VMs, bridges, and new trusted execution environments. Building trust around those parts introduces new problems (at minimum you have to launch a new chain, though not necessarily a new token): it's hard to build a new decentralized trust network; even if it's built, its trust level falls far short of L1/ETH/BTC; and the new decentralized trust network fragments the value/trust on L1. EigenLayer introduces a new layer between ETH and these problem modules, turning each of them into a container. ETH holders can stake their ETH on one of these containers to provide it with security and trust. In return, the container pays fees back to those stakers. Basic structure as below:
ETH owner -- ETH mainnet -----(restake to one of the containers)---- EigenLayer ---(fees back to stakers)--- Container (the modules that need trust/security)
Viewed from another angle, it's like an open investment market where stakers back the modules they find promising, in exchange for returns. I think this is very important: besides oracles, data availability is a huge problem for a blockchain system that doesn't make use of another blockchain. After all, I have a lot of doubts about storage solutions like Arweave or Filecoin.
Although there are other solutions, such as using zk proofs of another chain's state to build trust toward that chain without any third party, you still have to rely on that chain's own security/trust. For effective scaling, most of the work needs to happen within the system formed by a single chain. (Which is exactly the set of problems EigenLayer proposes to solve.) This also fits my view of the decentralized future: only 1~3 main chains will occupy 99% of the networks and traffic.
Honestly, this sounds to me like a proof-of-stake, more modular version of co-mining: sharing the main chain's security resources with sidechains. It fits the intuition for scaling the main chain quite well.
#EigenLayer #ETH #Layer2 #sidechain #coMining
I think jobs will be reduced by AI automation rather than directly replaced. Let's analyze how this unstoppable trend will unfold in the coming decade.
First, let's see which jobs are being reduced/replaced. After the pandemic, tons of phone-answering, front-desk, and registration jobs have been automated. So it's easy to see that repetitive office/front-desk or assistant jobs will be on the front line of major reductions. This may include many low-end coder jobs. Another ongoing trend is on-request content-creator jobs: AI art generated in seconds is getting more and more comparable to works that would take human artists days to complete.
Then, from a more general perspective, jobs that require a lot of human interaction and creativity will be very hard or impossible to replace. Due to the aging problem around the world, nursing and healthcare will be the most sustainable jobs in the coming decade. Likewise, teachers, trainers, and consultants will be hard to replace. Artist is a tricky topic, but it's safe to say that art that abstracts thoughts and ideas will always have a group of wealthy audiences. (Also for money laundering.)
In addition, I think some jobs will be tougher to replace than many people anticipate. Number one is driving: self-driving tech is still very far away, not to mention the time it takes for adoption and legislation. Also factory assembly-line jobs, especially for smartphones or other electronic devices that are hard to automate. No robotic hand/arm will be as cheap and as reliable as a human's.
I think AI automation will reduce jobs rather than directly replace them. Let's analyze how this unstoppable trend will develop over the coming decade.
First, let's look at which jobs are being reduced/replaced. After the pandemic, lots of phone-answering, front-desk, and registration jobs are being automated. So it's easy to see that repetitive office/front-desk or assistant jobs will be the first to be greatly reduced. This may include many low-end coding jobs. Another trend is low-end commissioned creator work (i.e., creators who take requests and produce on demand): AI art generated in seconds is becoming more and more comparable to works that take human artists days to complete.
Then, analyzing from the angle of automation feasibility, jobs requiring lots of human interaction and creativity will be very hard or impossible to replace. Given the global aging problem, nursing and healthcare will be the most sustainable jobs of the coming decade. Likewise, teachers, trainers, and consultants will be hard to replace. High-end artists are hard to define, but artworks that can abstract thoughts and ideas will always have a group of wealthy audiences. (Plus the ones used for money laundering.)
In addition, I think some jobs will be harder to replace than many people expect. First is drivers: self-driving technology is still very far away, not to mention the time needed for adoption and legislation. Also factory assembly-line work, especially for smartphones or other electronic devices that are hard to automate. No robotic hand/arm can be as cheap and reliable as a human's.
#AI #automation #future #prediction #job #career
Haha, it gets easier once you've listened a lot. I have a bit of dyslexia: unless I subvocalize, a wall of text turns into a mess once it enters my brain. Simple sentences with little information are fine, but for more technical articles I have to read them over and over to break down the sentence structure.
So most of my information intake is by listening. I really didn't like reading text online before; now I have Edge and Google read it aloud. That's why sites like Twitter, where the text is split into separate chunks that read-aloud tools can't string together, are still quite a struggle for me.
"HMD’s latest Nokia phone is designed to be repaired in minutes"
Right to repair is becoming a huge deal. Besides what electronic waste does to our environment, the life cycle of our phones/laptops is getting longer and longer due to technological stagnation.
It's awesome that Nokia collaborated with iFixit for parts and repair tools. What I love most is that they will unlock the bootloader when the official support cycle ends. A sustainable product life cycle would be great: recyclable design, and a fully functional/profitable recycling operation.
#RightToRepair #EnvironmentProtection #ElectronicGarbage #Nokia #Ifixit #ProductCycle
https://openai.com/blog/planning-for-agi-and-beyond/
OpenAI's roadmap toward AGI. Nothing new or interesting: they will limit the general public's access to the model/system (I'm not judging whether that's good or bad), and the open-sourcing process will be slow.
Anyone who wants an open-source LLM should look elsewhere. Meanwhile, Meta's new LLaMA seems more open. (It appears the model structure and training tools are open, but the actual LLaMA product/model weights are not.)
What's impressive is that they claim the 13-billion-parameter LLaMA outperforms the 175-billion-parameter GPT-3 on many tasks. (Although it could …) And from the comparison sheet, the 7-billion-parameter LLaMA is not far behind. Hope to see more updates once developers get to test them. It certainly gives a lot of hope for democratizing LLMs.
https://mobile.twitter.com/ylecun/status/1629189925089296386
#AI #OpenAI #AGI #ModelTraining #Meta
What I enjoy the most is the basil dip in the soup when I go for pho. 🤣
Isn't SHA-256 quantum resistant? ECDSA is indeed a problem though; not sure whether a hard fork would be needed.
https://consensys.net/blog/developers/how-will-quantum-supremacy-affect-blockchain/
