I'm clearly the minority here. I used to be a binger. It ultimately doesn't support my current goals. I have a lot of responsibilities to juggle, and I can't support all of them if I give into the instinct to binge 🤷🏻‍♀️ I time box my binges these days.
I've definitely been there. There are times when my brain cannot even grasp for words. I'll be thinking spoon and can only come up with fork or knife. It just doesn't work. Language is the piece that seems to disconnect first for me. Interesting that it seems to be the same for you.
And if it keeps getting pushed further with questions or problems it literally hurts. This has gotten worse as I've gotten older 😫 and I've also come to the belief that no problem is worth getting into that state. When I'm close, I just have to say no. And when you're working and living with others who have different levels of responsibilities and are on different binge cycles, it's really important to set and respect those boundaries, IMO.
🤣 I understand it, too - I just try to stick to shorter cycles. If I'm working on something I'm like a dog with a bone… but I've learned that if I don't let it go in the evenings or weekends, I burn out. Supporting multiple clients and having a wide range of responsibilities has changed this for me a lot, too. I have to practice letting go and setting boundaries or I end up not being successful at anything.
SO much of this resonates with me. And I can say, personally, I hope I'm striving for a greater understanding. But I often think of the symbol of an unalome, and the way it represents the "path to transcendence" in Buddhism - it's not a straight line. And neither is getting out of local minima. I hope, as humans, we collectively strive for more. But I can only control myself, and trying to judge others by where they appear to be on that journey is fruitless.
Your right to speak doesn't equate to an entitlement for me to see what you said, if I choose to opt out. I think this extends to likes.
I'm not sure only zaps is for me. I might feel differently if I were getting some insane amount of likes - my experience is not the same as others'. However, I think that enabling users to have more control over their experience (as long as it doesn't force those same choices on others) is ultimately good.
I have a lot more thoughts about the pros/cons of it, and can see both sides. I'll get to weigh those arguments when deciding for myself how to set my own configurations. But ultimately, I'm pro user choice, and this feature enables more of that.
I wish I could. Whenever I get a new phone or take my case off it seems so pretty. But the rate at which I have to replace cases tells me that my clumsiness makes that luxury not worth the cost 😫
Thank you! This is awesome!
I love this setup! Are these bought or made? Would love a tutorial on that cage. I have a ton of squirrels who like to munch on my veggies
Thanks for the super interesting convo ☺️ I'm going to be thinking about a lot of your points for a while
Yeah, it's super interesting about the RLHF! Overfit is a very real problem, and adjusting weights on models this large can be kind of like a butterfly effect. I think there is a TON of value in its generalization. But I'm of the opinion that it can't, or maybe shouldn't, do all tasks by itself - to me it's just not necessarily efficient, like using a hammer on a screw. Bigger doesn't always mean better - it will start to underperform at a certain size. TBD what that is. But let it do what it does best! Language, conceptual derivations, and awesome encoding - and let other models do what they're better suited for. Kind of like how our brains work… we have separate specialized areas that we delegate tasks to when necessary. We're building awesome components, but I'd like us to acknowledge their limitations - not to discourage the work that has been done, but to figure out the next problem that needs to be solved.
For sure - and I think that is by design… they're going to sell it to companies who will want to specialize and limit it to be more in line with their specific use case. Right now, it's just there to kind of be whatever you tell it to be - like a giant ad.
Maybe. But the problem is that many of those internal representations are just straight-up lies. It's not a truth-telling machine. It's built to produce compelling speech. It's a bit different in spaces like engineering, where the content is so well curated. But just because it identifies a pattern does not make it correct or intelligent. It would need some secondary mechanism for testing hypotheses, and at that point we're moving out of the LLM space and into the reasoning space. Progress will come when these spaces come together.
Yeah, I think there is a lot of growth that will be happening in the model network architecture space. Using smaller, more precise models for the specialized tasks LLMs are currently used for, reasoning models (lots of good research going on here), as well as RL for experiments (but interestingly enough, those require physics engines that we define, so most of the time they can only solve for that vs actual physics - this in combination with robotics and more sophisticated sensors has a lot of promise, though - essentially self-driving cars, and there is still a long way to go there). Collaborative models are super cool though.
Ultimately though, I'm concerned about the data creation. I don't want there to be a misunderstanding of how important it is that we continue to produce it.
He's making this claim based on the model's architecture. Not all models in the future will have this limitation, but LLMs do, based on their knowledge domain limitation.
The solution to these problems is actually smaller models solving more specific tasks. That's his argument. The larger the model, the more abstract - it's good at connecting dots, but will start to struggle with precision. He's not saying a model can't do this; he's saying large LLMs do some things well and some things less well, and viewing one as a one-stop shop for all intelligence is not reasonable. We need RL, we need more specialized tasks, and we need a lot more research in reasoning models.

