
> it's almost impossible to get Gemini to not do "helpful" drive-by-refactors

Just asking "Explain what this service does?" turns into

[No response for three minutes...]

+729 -522



it's also so aggressive about taking out debug log statements and in-progress code. I'll ask it to fill in a new function somewhere else and it will remove all of the half written code from the piece I'm currently working on.


I ended up adding a "NEVER REMOVE LOGGING OR DEBUGGING INFO, OPT TO ADD MORE OF IT" to my user instructions, and that has _somewhat_ fixed the problem but introduced a new problem where, no matter what I'm talking to it about, it tries to add logging, even if it's not a code problem. I've had it explain that I could set up an ESP32 with a sensor so that I could get logging from it, and then write me firmware for it.


"I've had it explain that I could setup an ESP32 with a sensor so that I could get logging from it then write me firmware for it." lol did you try it? This so far from everything ratinonal


If it's adding too much logging now, have you tried softening the instruction about adding more?

"NEVER REMOVE LOGGING OR DEBUGGING INFO. If unsure, bias towards introducing sensible logging."

Or just

"NEVER REMOVE LOGGING OR DEBUGGING INFO."


What. You don't have yours ask for edit approval?


The depressing truth is most people I know just run all these tools in /yolo mode or equivalents.

Because your coworkers definitely are, and we're stack ranked, so it's a race (literally) to the bottom. Just send it...

(All this actually seems to do is push the burden on to their coworkers as reviewers, for what it's worth)


You're mixing up two things, though. One is what the agent does "locally", wherever that might be (for me, it's inside a VM), and the second is what code you actually share, or as you call it, "send".

Just because you don't want to gate every change in #1, doesn't mean you're just throwing shit via #2. I'm still reviewing my code as much as before, if not more now, before I consider it ready to be reviewed by others.

But I'm seemingly also one of the few developers who takes responsibility for the code I produce, even if AI happens to have coded it.


> Just because you don't want to gate every change in #1, doesn't mean you're just throwing shit via #2,

Right but in practice from what I've seen at work, it does.

You're right: it shouldn't inherently, but that's what I've been seeing.

> But I'm seemingly also one of the few developers who takes responsibility for the code I produce, even if AI happens to have coded it.

Pretty much what I'm getting at, yeah


There's a huge psychological difference between 1) letting the agent write whatever then editing it for commit, and 2) approving the edits. There shouldn't be, but there is.


Who has time for that? This is how I run codex: `codex --sandbox danger-full-access --dangerously-bypass-approvals-and-sandbox --search exec "$PROMPT"`, having to approve each change would effectively destroy the entire point of using an agent, at least for me.

Edit: obviously inside something so it doesn't have access to the rest of my system, but enough access to be useful.


I wouldn't even think of letting an agent work in that mode. Even the best of them produce garbage code unless I keep them on a tight leash. And no, it's not a skill issue.

What I don't have time to do is debug obvious slop.


I ended up running codex with all the "danger" flags, but in a throw-away VM with copy-on-write access to code folders.
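The VM route is one way; a lighter-weight sketch of just the copy-on-write part, assuming GNU coreutils on a reflink-capable filesystem (Btrfs/XFS — `--reflink=auto` silently falls back to a plain copy elsewhere). The repo path and the commented codex invocation are illustrative, not anyone's actual setup:

```shell
# Hypothetical repo: create a stand-in checkout so the demo is self-contained.
src=$(mktemp -d)/myproject
mkdir -p "$src"
echo 'int main(void) { return 0; }' > "$src/main.c"

# Throwaway copy-on-write clone for the agent to trash at will.
# --reflink=auto shares data blocks where the filesystem supports it
# and falls back to an ordinary copy where it doesn't (GNU cp only).
work=$(mktemp -d)/myproject
cp -r --reflink=auto "$src" "$work"

# Point the agent at the clone; the original tree is never touched.
#   codex --cd "$work" --dangerously-bypass-approvals-and-sandbox exec "$PROMPT"

# Afterwards, diff and cherry-pick what you actually want to keep:
#   diff -ru "$src" "$work"
```

Cheap to create, cheap to throw away, and the diff at the end is where the actual review happens.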

The built-in approval thing sounds like a good idea, but in practice it's unusable. A typical session for me went like:

  About to run "sed -n '1,100p' example.cpp", approve?
  About to run "sed -n '100,200p' example.cpp", approve?
  About to run "sed -n '200,300p' example.cpp", approve?
Could very well be a skill issue, but that was mighty annoying, and there was no obvious fix (the "don't ask again for ...." options weren't helping).


One decent approach (which Codex implements, as do some others) is to run these commands in a read-only sandbox without approval and let the model ask for your approval when it wants to run outside the sandbox. An even better approach is just doing abstract interpretation over shell command proposals.

You want something like `codex -s read-only -a on-failure` (from memory: look up the exact flags).
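A toy sketch of that "analyze the command proposal" idea: auto-approve a proposed command only if every pipeline stage starts with a program from a read-only allowlist and nothing redirects output. The allowlist and the tokenization are deliberately naive illustrations, not how Codex actually does it — a real harness needs a real shell parser:

```shell
# Return 0 (approve) only for commands we can statically argue are read-only.
is_read_only() {
  cmd=$1
  case "$cmd" in *'>'*) return 1 ;; esac   # any redirection writes somewhere
  # Split the pipeline on '|'; the for-loop list is expanded up front,
  # so reusing "$@" inside the loop is safe.
  oldIFS=$IFS; IFS='|'
  set -- $cmd
  IFS=$oldIFS
  for stage in "$@"; do
    set -- $stage                          # naive word split; toy only
    case "$1" in
      cat|ls|head|tail|grep|rg|wc|file) ;; # plainly read-only programs
      sed) case "$stage" in *-i*) return 1 ;; esac ;;  # sed -i edits in place
      *) return 1 ;;                       # unknown program: ask the human
    esac
  done
  return 0
}
```

Under this policy the `sed -n '1,100p' example.cpp` spam from upthread sails through without a prompt, while anything unrecognized still falls back to manual approval.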


I keep it on a tight leash too, not sure how that's related. What gets edited on disk is very different from what gets committed.


>Who has time for that?

People that don't put out slop, mostly.


That's another thing entirely. I still review and manually decide on the exact design and architecture of the code, with more care now than before. That doesn't mean I want the agent's UI to require manual approval of each small change it makes.


Ask mode exists. I think the models work on the assumption that if you're allowing edits, then of course you must want edits.


If you had to ask, it obviously needs to refactor the code for clarity so the next person doesn't need to ask.


"I don't know what did it, but here's what it does now"


I've seen Kimi do this a ton as well, so insufferable.



