
This is a big deal; the highlight is Chrome autobrowse, which goes head-to-head with OpenAI Atlas.

“Head to head” is doing a lot of heavy lifting there. How much market share does Atlas actually have, I wonder?

I actually think it has already been abandoned after very little adoption.

Don't spread yourself too thin, or the river will dry up; pick your battles.

Do the top sellers from the past year work on Linux?

I've been meaning to set up Bazzite on an older desktop.


Basically all games work, except some multiplayer games with kernel anticheat. You can look up the status of games here:

https://www.protondb.com/

And specifically the state of multiplayer games with anticheat here (which is a much less favorable % of working games):

https://areweanticheatyet.com/

I personally wouldn't install any kernel anticheat on a computer I intend to use for anything important, so I'd refuse to install the incompatible games even if I were using Windows.


Take ProtonDB with a grain of salt, Apex Legends still has a Silver rating ("Runs with minor issues") despite being 100% unplayable on Linux for over a year now.

"Just trust us, bro! Our security is better than the banks, governments, and major services and we would never let anyone exploit or abuse the gaping hole we're deliberately installing in your security profile! It's just our perfectly secure rootkit that won't ever be used for anything bad!"

It's so weird to me that people just allow this, or even defend it. Game companies should be legally obligated to scale human moderation and curation of multiplayer games, and if you're paying for a service that gets moderated and curated, there should be some legal expectation of process - a requirement that the service provider lay out a specific "due process" framework, even if it ends up mediated, that gives the customer legal recourse. Instead, they try to automate everything, which has notoriously indiscriminate collateral damage with no recourse.

If you pour a significant chunk of your private time and money into a game, you should be entitled not to arbitrarily lose your account or gameplay progress because some poorly configured naive Bayes classifier decided you did something wrong, without corresponding evidence or recourse to undo bad bans.

For some reason companies are entitled to infinitely expand their reach without concurrently expanding their responsibilities in providing service to individuals. Must be nice.


From Steam’s 2025 top X charts (https://store.steampowered.com/charts/bestofyear/2025?tab=3)

11/12 of the top-selling new releases work (the exception is Battlefield 6, because its anticheat blocks Linux)

9/12 of the top sellers (COD, BF6, and Apex block Linux)

11/12 of the most played (Apex blocks Linux)

So if you’re into competitive ranked games (especially FPS), you might run into anticheat blocks, but practically everything else works.



It's natural to assume that subagents will scale to the next level of abstraction; as you mentioned, they do not.

The unlock here is tmux-based session management for the teammates, with two-way communication through an agent inbox. It works very well.


Only until Agent Commerce Protocol is more standardized: https://www.agenticcommerce.dev

I don't see why this thing has much reason to be focused on agents or AI - a standardized API for e-commerce is useful regardless of that use case.

Single-idea implementations ("one-trick ponies") will die off, and composites that are harder to disassemble will be worth more.

Aren't many of those tests? Why define them as linters?

Great question. I let Claude help answer this; see below:

The key differences are:

  1. Static vs Runtime Analysis
  Linters use AST parsing to analyze code structure without executing it. Tests verify actual runtime behavior. Example from our datetime_linter:

  tree = ast.parse(file_path.read_text())
  for node in ast.walk(tree):
      if isinstance(node, ast.Import):
          for alias in node.names:
              if alias.name == "datetime":
                  # Violation: should use pendulum

  This catches import datetime syntactically. A test would need to actually execute code and observe wrong datetime behavior.

  2. Feedback Loop Speed
  - Linters: run in pre-commit hooks. Agent writes code → instant feedback → fix → iterate in seconds
  - Tests: run in CI. Commit → push → wait minutes/hours → fix in the next session

  For AI agents, this is critical. A linter that blocks the commit keeps them on track immediately, rather than letting them discover violations after a test run.

  3. Structural Violations
  For example, our `fastapi_security_linter` catches things like "route missing TenantRouter decorator". These are structural violations - "you forgot to add X" - not "X doesn't work correctly." Tests verify the behavior of X when it exists.

  4. Coverage Exhaustiveness
  Linters scan all code paths structurally. Tests only cover the scenarios you explicitly write. Our org_scope_linter catches every unscoped platform query across the entire codebase in one pass; testing that would require writing a test for each query.

  5. The Hybrid Value
  We actually have both. The linter catches "you forgot the security decorator" instantly. The test (test_fastapi_authorization.py) verifies that the security decorator actually blocks unauthorized users at runtime. Different failure modes, complementary protections.

  Think of it like this: linters are compile-time checks, tests are runtime checks. TypeScript catches string + number at compile time; you don't write a test for that.
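To make point 1 concrete, here's a minimal, self-contained sketch of such an AST-based check. The function name, sample source, and "prefer pendulum" rule are illustrative stand-ins, not the actual datetime_linter described above:

```python
import ast

def find_datetime_imports(source: str) -> list[int]:
    """Return line numbers where the stdlib datetime module is imported.

    A toy lint check: it inspects structure via the AST without ever
    executing the code under analysis.
    """
    violations = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Catches `import datetime` (possibly aliased).
        if isinstance(node, ast.Import):
            for alias in node.names:
                if alias.name == "datetime":
                    violations.append(node.lineno)
        # Catches `from datetime import ...` as well.
        elif isinstance(node, ast.ImportFrom):
            if node.module == "datetime":
                violations.append(node.lineno)
    return violations

sample = "import datetime\nimport pendulum\nfrom datetime import timezone\n"
print(find_datetime_imports(sample))  # → [1, 3]
```

Wiring a function like this into a pre-commit hook (fail if the list is non-empty) is what gives the seconds-long feedback loop from point 2.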

Doesn't it lose prompts prior to the latest compaction?


I’ve sent Claude back to look at the transcript file from before compaction. It was pretty bad at it but did eventually recover the prompt and solution from the jsonl file.


It loses them in the current context (say 200k tokens), not in its SQLite history db (limited only by your local storage).


I did not know it was SQLite, thanks for noting. That gives me the idea to make an MCP server, Skill, or classical script that can slurp those and produce a PROMPTS.md, or answer other questions via SQL. Will try that this week.
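If the history really is a plain SQLite file, the "slurp into PROMPTS.md" script could be as small as the sketch below. Note that the table and column names (`messages`, `role`, `content`) are pure assumptions on my part; inspect the real schema with `.schema` first:

```python
import sqlite3

def dump_prompts(db_path: str, out_path: str) -> int:
    """Write every user prompt from a history db into a PROMPTS.md file.

    Hypothetical sketch: assumes a table `messages(role, content)`,
    which may not match the actual schema.
    """
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT content FROM messages WHERE role = 'user' ORDER BY rowid"
    ).fetchall()
    conn.close()
    with open(out_path, "w") as f:
        for (content,) in rows:
            f.write(f"- {content}\n")
    return len(rows)  # number of prompts written
```

An MCP server or Skill would just wrap the same query behind a tool call.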


It doesn't lose the prompt but slowly drains out of context. Use the PreCompact hook to write a summary.

> this time with *foresight*.

So much this! Nondeterminism in LLMs does not mean you simply "reroll" your prompt and hope for the best; incremental reprompting is what helps.


Or you just code it yourself.


It's because the velocity difference is no longer 25% between the slower and faster programmers.


I think 25% is a low estimate. Using a proper programming editor alone could realistically offer 2x or more productivity over a basic text editor, and there have definitely been programmers who stayed with basic editors.

And I have seen first-hand programming teams where there was clearly more than a 25% difference: some could code a lot, and some could barely code at all.

I think it would be quite fair to say that, between tools and individual skill, there could easily be a 5x speed difference between slower and faster programmers, maybe more. Granted, LLMs are even faster, but a 5x potential speedup was no slouch either.


What I've seen is that the productive developers are the ones who understand what problem they are solving. They either take the time to think it through or just have a seemingly uncanny ability to see right to the heart of the problem. No false starts, no playing with different implementations. They write the code, it's efficient, and it works.

The slow developers have false starts, have to rework their code as they discover edge cases they didn't think about before, or get all the way through and realize they've solved the wrong problem altogether, or it's too slow at production scale.

