legojoey17 | 23 days ago | on: Speed up responses with fast mode
What's crazy is the pricing difference given that OpenAI recently reduced latency on some models with no price change -
https://x.com/OpenAIDevs/status/2018838297221726482
dcre | 23 days ago
Yes, but GPT-5.2 and Codex were widely considered slower than Opus before that. They still feel very slow, at least on high. I should try medium more often.