Hey, #[2]. I’ve been working on this and found that retrieval-based chatbots like Talk2Satoshi are less flexible than asking ChatGPT with a good prompt and temperature set to 0. That approach could also produce paraphrased/elaborated explanations (not just citations) for different levels of Bitcoin knowledge, with more instruction tuning, etc. Happy to chat.
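For flavor, a minimal sketch of that setup with the openai Python client. The model name, prompts, and knowledge-level framing are placeholders for illustration, not a finished design:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical system prompt: adapt the explanation to the reader's level,
# in the model's own words rather than retrieved quotes.
system = (
    "You explain Bitcoin concepts. Adapt depth and vocabulary to the "
    "stated knowledge level; paraphrase and elaborate, don't quote."
)

resp = client.chat.completions.create(
    model="gpt-4o",   # placeholder; any chat model works
    temperature=0,    # greedy-ish decoding: pick the most likely tokens
    messages=[
        {"role": "system", "content": system},
        {"role": "user", "content": "Beginner level: what is a UTXO?"},
    ],
)
print(resp.choices[0].message.content)
```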
#[0] just asked me to play some upbeat guilty pleasure music. This is what I came up with. #tunestr
https://music.apple.com/us/album/jump-around/1604628159?i=1604628161
🙇🏻‍♀️
🦁⚡️🧙🏻
Happy Mother’s Day, #[0], light of my life.
#fullCringeMode
https://music.apple.com/us/album/the-first-time-ever-i-saw-your-face/355178034?i=355178117
You are my heart and world. There’s nothing like living life by your side. ❤️
We make lungos, so it might be double a proper espresso. But who’s counting? 🫣❤️
Trial and error is Reinforcement Learning. Machine Learning includes Supervised Learning, Unsupervised Learning, Self-Supervised Learning and Reinforcement Learning (these are types of problems). And any of these can be solved using Deep Learning or other methods.
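Since “trial and error” is the whole idea, here’s a minimal RL sketch: an epsilon-greedy agent learning a two-armed bandit purely from the rewards of its own trials. The payout probabilities are made up for illustration:

```python
import random

# True payout probabilities the agent doesn't know (hypothetical values).
TRUE_PAYOUT = {"A": 0.3, "B": 0.7}

value = {"A": 0.0, "B": 0.0}   # estimated value of each arm
counts = {"A": 0, "B": 0}
epsilon = 0.1                  # exploration rate

for step in range(10_000):
    # Explore occasionally; otherwise exploit the best estimate so far.
    if random.random() < epsilon:
        arm = random.choice(["A", "B"])
    else:
        arm = max(value, key=value.get)
    reward = 1.0 if random.random() < TRUE_PAYOUT[arm] else 0.0
    counts[arm] += 1
    # Incremental average: learn from the outcome of each trial.
    value[arm] += (reward - value[arm]) / counts[arm]

print(value)  # estimates drift toward the true payout probabilities
```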
They’re all subsets of AI. I think Deep Learning has been the “cool” and popular one for many, many years. Since the “Attention Is All You Need” revolution, there’s been an apparent need to differentiate. Transformers are DL, but they work much better than the now-classic CNNs, RNNs, etc. “Transformers” is probably too unwieldy a term to be the popular label, so AI/ML it is.
Transformers are 🔥. Multi-head attention, RL from human feedback, and self-supervised learning in general. IYKYK, and if you know, let’s talk.
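And if you want to see what multi-head attention actually computes, here’s a minimal NumPy sketch. Random matrices stand in for the learned projections, and the shapes and head count are illustrative only:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_self_attention(X, n_heads, seed=0):
    """Scaled dot-product attention per head, then concat + project.
    Random weights are placeholders for learned parameters."""
    rng = np.random.default_rng(seed)
    seq_len, d_model = X.shape
    d_head = d_model // n_heads
    heads = []
    for _ in range(n_heads):
        Wq = rng.standard_normal((d_model, d_head)) / np.sqrt(d_model)
        Wk = rng.standard_normal((d_model, d_head)) / np.sqrt(d_model)
        Wv = rng.standard_normal((d_model, d_head)) / np.sqrt(d_model)
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(d_head)  # similarity of every token pair
        heads.append(softmax(scores) @ V)   # weighted mix of value vectors
    Wo = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    return np.concatenate(heads, axis=-1) @ Wo  # concat heads, project back

# Example: 5 "tokens" with an 8-dim embedding, 2 heads.
X = np.random.default_rng(1).standard_normal((5, 8))
print(multi_head_self_attention(X, n_heads=2).shape)  # (5, 8)
```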