The models are not the bottleneck, though. They are far more capable than what they are currently being used for.

nostr:nevent1qvzqqqqqqypzp53tekcay5zmcps0vk5xe40jq5ewche7g8qx4657mtpe76a8dltwqy88wumn8ghj7mn0wvhxcmmv9uq32amnwvaz7tmwdaehgu3wdau8gu3wv3jhvtcqyzx7zj04yyu58hes4pcdkz8e305yjgnxk2qfpdjsld6r3y7q5gttqzgsla0

Discussion

Is the bottleneck the modality?

🤔 are they?

I often wonder if they are training the models to stretch the conversation into using more tokens than necessary. Why give you the answer for $1 when they can stretch it out to cost $5?

I feel like I notice a trend of sidetracking and an odd lack of problem solving, where the solution only arrives two or three prompts later.

I’m convinced responses from ChatGPT containing recommendations on products to buy are already biased towards companies with the best affiliate setup with OpenAI.

Or maybe that is just an issue with using internet training data.