Replying to Kis Sean

https://www.youtube.com/watch?v=outcGtbnMuQ&t=3276s&ab_channel=OpenAI

GPT-4 demo, the biggest improvements imo are:

1. Support for image input; this is the huge part, because users can draw something and then ask GPT-4 to generate code or something else based on it (see the request sketch after this list).

2. Larger memory: the context window grows from 4k tokens to 32k tokens. That's crucial for analyzing long tax documents and code, and image input seems to depend on it too (see the token-counting sketch after this list).

3. A bit smarter than ChatGPT? Hard to say how much without a direct comparison.
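A rough idea of what point 1's image-to-code workflow could look like, assuming an API shaped like OpenAI's later vision-enabled chat endpoint. The model name, the sketch URL, and the prompt below are all assumptions for illustration, not something exposed at the time of the demo:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Ask the model to turn a hand-drawn mockup into working front-end code.
response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # assumed model name; image input was not publicly available at demo time
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Turn this napkin sketch into a single HTML file with inline CSS."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/napkin-sketch.jpg"}},  # hypothetical image URL
            ],
        }
    ],
    max_tokens=1024,
)

print(response.choices[0].message.content)  # the generated HTML/CSS
```

In the demo the input was a photo of a hand-drawn website mockup and the model returned working HTML/JS, which is exactly the workflow point 1 is about.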

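On point 2, a quick way to see whether a long document even fits in the 32k window is to count tokens with tiktoken before sending it. A minimal sketch, where the file name and the reply budget are made up for illustration:

```python
import tiktoken

# cl100k_base is the tokenizer used by the GPT-4 family of models.
enc = tiktoken.get_encoding("cl100k_base")

def fits_in_context(text: str, context_limit: int = 32_768, reply_budget: int = 2_000) -> bool:
    """Return True if the document plus a reply budget fits in the context window."""
    n_tokens = len(enc.encode(text))
    print(f"document: {n_tokens} tokens / limit: {context_limit}")
    return n_tokens + reply_budget <= context_limit

# Hypothetical long document, e.g. a tax regulation or a large source file.
with open("tax_code.txt", encoding="utf-8") as f:
    print(fits_in_context(f.read()))
```

Roughly 32k tokens is on the order of 50 pages of English text, so whole contracts or medium-sized source files can go into a single prompt instead of being chunked.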

#AI #openAI #ChatGPT #gpt4 #future

https://twitter.com/JordiRib1/status/1635694953463705600?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1635694953463705600%7Ctwgr%5Ed977989d6f29b9782ee2377e90a5675e720ba608%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fwww.theverge.com%2F2023%2F3%2F14%2F23639928%2Fmicrosoft-bing-chatbot-ai-gpt-4-llm

Prometheus == GPT-4, the hype is gone 😅


Discussion

Even if Bing's AI is fundamentally GPT-4, it must have been nerfed a lot. It's already dumber than ChatGPT in many aspects; for example, it doesn't understand "translate what I say later" and will directly translate that sentence instead. And in recent days it has just stopped working: it searches the web for what I want translated, and then freezes.

I suspect it has something to do with the RLHF part, or maybe they shortened the inference time per token?
