gpt4o still sucks at generating 3d models. shame. I just asked it to make a simple low-poly star:

Was hoping its improved visual understanding would help here.

Discussion

Is it rolled out already? Thought they said over the next couple of weeks

Had to log out and back in again to get it 🤷‍♂️

First impression: It seems a lot faster than GPT-4

yeah it's quick

Have you used the desktop app to try sharing screen and coding with it? Any good?

Can’t find that anywhere

Me neither!

«We are beginning to roll out GPT-4o to ChatGPT Plus and Team users»

https://openai.com/index/gpt-4o-and-more-tools-to-chatgpt-free/

Yeah found it, had to log out and back in to see it in the UI 🤙

i just want ai to make models for me 😭. It's the only part i don’t want to do in gamedev

To me this is a critical step in generative AI. We ask it to reason about the world without any intuitive model of what the world is. Training should involve translations between text, audio, images, video, 3d models, and 3d animations. That would force it to develop an intermediate understanding of the world.

yes 🤝

Why would a language model make good visuals? Diffusion models are what make good visuals.

this new model does audio, images and text all at the same time, so was hoping it would be smarter here. not sure what their architecture looks like

Surely they will eventually combine the various types of models into a single cohesive product, but maybe instead of waiting for that you should just move over to a diffusion model to make your desired imagery.

GPT is primarily a language model. OpenAI is working on diffusion models too, most notably their video generation model Sora, but language models will never do imagery as well as diffusion models. Until they have something that combines both types of models, I wouldn’t expect anything close to perfection, or even “good”.

It's much better at generating Blender scripts that generate models, though.
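For example, something along these lines is the kind of script it handles well (a rough sketch of my own, not GPT-4o output; the make_low_poly_star helper, point count, and radii are just placeholder choices), pasted into Blender's Scripting tab:

```python
# Minimal sketch: build a flat low-poly star mesh with Blender's bpy API.
# Point count and radii are arbitrary; tweak to taste.
import math
import bpy

def make_low_poly_star(points=5, outer_r=1.0, inner_r=0.4, name="Star"):
    verts = [(0.0, 0.0, 0.0)]            # center vertex for a triangle fan
    n = points * 2                        # alternate outer/inner tips around the circle
    for i in range(n):
        r = outer_r if i % 2 == 0 else inner_r
        angle = math.pi / 2 + i * math.pi / points   # start pointing up
        verts.append((r * math.cos(angle), r * math.sin(angle), 0.0))
    # fan of triangles: center, current rim vertex, next rim vertex (wrapping around)
    faces = [(0, i, i % n + 1) for i in range(1, n + 1)]
    mesh = bpy.data.meshes.new(name)
    mesh.from_pydata(verts, [], faces)
    mesh.update()
    obj = bpy.data.objects.new(name, mesh)
    bpy.context.collection.objects.link(obj)
    return obj

make_low_poly_star()
```

Asking it to write or fix a script like this tends to go a lot better than asking it to "make a 3d model" directly.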