Wednesday Grooves #tunestr 🌶️
Wednesday Grooves #tunestr
GM #nostriches, have a rad day!
#coffeechain #grownostr
https://nostrcheck.me/media/public/nostrcheck.me_7583248217153567551689735300.webp
GM BRO
GM🌤️ on this midweek. Having some coffee before continuing my project.
#coffeechain
https://nostrcheck.me/media/public/nostrcheck.me_2876978086719749721689736313.webp
GM🫡🐸
Daily habit. Favorite Southeast Asian coffee.
#coffeechain #everyday #americano #grownostr
https://nostrcheck.me/media/public/nostrcheck.me_2502625568257521511689741403.webp
💕💕💕
GM HOPING ALL THE BEST FOR YOU 💕💕💕
What are the risks of this process? “Qualcomm wants to position its processors as well-suited for AI, but "on the edge," or on a device, instead of "in the cloud." If large language models can run on phones instead of in large data centers, it could push down the significant cost of running AI models, and could lead to better and faster voice assistants and other apps.”
Qualcomm and Meta will enable the social networking company's new large language model, Llama 2, to run on Qualcomm chips on phones and PCs starting in 2024, the companies announced today. From a report:
So far, LLMs have primarily run in large server farms, on Nvidia graphics processors, due to the technology's vast needs for computational power and data, boosting Nvidia stock, which is up more than 220% this year. But the AI boom has largely missed the companies that make leading-edge processors for phones and PCs, like Qualcomm. Its stock is up about 10% so far in 2023, trailing the NASDAQ's gain of 36%. The announcement on Tuesday suggests that Qualcomm wants to position its processors as well-suited for AI, but "on the edge," or on a device, instead of "in the cloud." If large language models can run on phones instead of in large data centers, it could push down the significant cost of running AI models, and could lead to better and faster voice assistants and other apps.
https://www.cnbc.com/2023/07/18/meta-and-qualcomm-team-up-to-run-big-ai-models-on-phones.html
Qualcomm wants to position its processors as well-suited for AI, but "on the edge," or on a device, instead of "in the cloud." If large language models can run on phones instead of in large data centers, it could push down the significant cost of running AI models, and could lead to better and faster voice assistants and other apps. nostr:npub13wfgha67mdxall3gqp2hlln7tc4s03w4zqhe05v4t7fptpvnsgqs0z4fun what are the top 10 risks?
"Wednesdays are for coffee and smiles, a perfect blend to make little ostrich's life worthwhile"
#coffeechain #plebchain #nostrich 
GM 💕🫂🐸✊
#coffeechain 
Thanks💕✊⚡️🫡