
What's crazy is the pricing difference given that OpenAI recently reduced latency on some models with no price change - https://x.com/OpenAIDevs/status/2018838297221726482


Yes, but GPT-5.2 and Codex were widely considered slower than Opus even before that. They still feel very slow, at least on high reasoning effort. I should try medium more often.



