I personally wouldn't install any kernel anticheat on a computer that I intend to use for anything important, so I would refuse to install the incompatible games even if I were using Windows.
Take ProtonDB with a grain of salt: Apex Legends still has a Silver rating ("Runs with minor issues") despite being 100% unplayable on Linux for over a year now.
"Just trust us, bro! Our security is better than the banks, governments, and major services and we would never let anyone exploit or abuse the gaping hole we're deliberately installing in your security profile! It's just our perfectly secure rootkit that won't ever be used for anything bad!"
It's so weird to me that people just allow this, or even defend it. Game companies should be legally obligated to scale human moderation and curation of multiplayer games, and if you're paying for a service that gets moderated and curated, there should be some legal expectation of process: a requirement that the service provider lay out a specific "due process" framework, even if it ends up mediated, that gives a customer legal recourse. Instead, they try to automate everything, which has notoriously indiscriminate collateral damage with no recourse.
If you pour a significant chunk of your private time and money into a game, you should be entitled not to arbitrarily lose your account or gameplay progress just because some poorly configured naive Bayes classifier decided you did something wrong, with no supporting evidence and no recourse to undo a bad ban.
For some reason companies are entitled to infinitely expand their reach without concurrently expanding their responsibilities in providing service to individuals. Must be nice.
Great question. I let Claude help answer this... see below:
The key differences are:
1. Static vs Runtime Analysis
Linters use AST parsing to analyze code structure without executing it. Tests verify actual runtime behavior. Example from our `datetime_linter`:
    import ast
    from pathlib import Path

    tree = ast.parse(file_path.read_text())  # file_path: pathlib.Path of the file under lint
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:  # one Import node can bind several names
                if alias.name == "datetime":
                    # Violation: should use pendulum
                    print(f"{file_path}:{node.lineno}: import datetime, use pendulum")
This catches `import datetime` syntactically. A test would need to actually execute code and observe wrong datetime behavior.
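To make the contrast concrete, here's a minimal sketch of what that runtime test might look like; the `event_timestamp` function and its naive-datetime bug are invented for illustration:

    from datetime import datetime

    def event_timestamp() -> datetime:
        return datetime.now()  # bug: naive datetime, no tzinfo attached

    def test_timestamp_is_timezone_aware():
        # Fails for the naive implementation above, but only if someone
        # thought to write this assertion, and only after running the code.
        assert event_timestamp().tzinfo is not None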
2. Feedback Loop Speed
- Linters: Run in pre-commit hooks. Agent writes code → instant feedback → fix → iterate in seconds
- Tests: Run in CI. Commit → push → wait minutes/hours → fix in next session
For AI agents, this is critical. A linter that blocks commit keeps them on track immediately rather than discovering violations after a test run.
3. Structural Violations
For example, our `fastapi_security_linter` catches things like "route missing TenantRouter decorator". These are structural violations - "you forgot to add X" - not "X doesn't work correctly." Tests verify the behavior of X when it exists.
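A hedged sketch of what such a structural check could look like; the real `fastapi_security_linter` isn't shown here, and the rule below (flag any handler registered on a router object not named `tenant_router`) is an assumption for illustration:

    import ast

    ROUTE_METHODS = {"get", "post", "put", "delete", "patch"}

    def unguarded_routes(source: str) -> list[int]:
        """Flag route handlers whose router object is not named tenant_router."""
        flagged = []
        for node in ast.walk(ast.parse(source)):
            if not isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                continue
            for dec in node.decorator_list:
                # @router.get("/x") parses as Call(func=Attribute(Name, "get"))
                target = dec.func if isinstance(dec, ast.Call) else dec
                if (isinstance(target, ast.Attribute)
                        and target.attr in ROUTE_METHODS
                        and isinstance(target.value, ast.Name)
                        and target.value.id != "tenant_router"):
                    flagged.append(node.lineno)
        return flagged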
4. Coverage Exhaustiveness
Linters scan all code paths structurally. Tests only cover scenarios you explicitly write. Our `org_scope_linter` catches every unscoped platform query across the entire codebase in one pass. Testing that would require writing a test for each query.
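As a sketch of the one-pass idea (the actual `org_scope_linter` logic isn't shown in this thread; the `.query(` and `org_id` patterns below are assumptions), a naive version could be:

    import ast

    def unscoped_queries(source: str) -> list[int]:
        """Naively flag statements that call .query(...) but never
        mention org_id anywhere in the same statement."""
        tree = ast.parse(source)
        flagged = []
        for stmt in ast.walk(tree):
            if not isinstance(stmt, (ast.Expr, ast.Assign, ast.Return)):
                continue
            seg = ast.get_source_segment(source, stmt) or ""
            if ".query(" in seg and "org_id" not in seg:
                flagged.append(stmt.lineno)
        return flagged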
5. The Hybrid Value
We actually have both. The linter catches "you forgot the security decorator" instantly. The test (`test_fastapi_authorization.py`) verifies "the security decorator actually blocks unauthorized users at runtime." Different failure modes, complementary protections.
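For instance, a minimal sketch of the runtime side (the real `test_fastapi_authorization.py` isn't shown; the app, dependency, and route here are invented for illustration):

    from fastapi import Depends, FastAPI, Header, HTTPException
    from fastapi.testclient import TestClient

    app = FastAPI()

    def require_user(authorization: str | None = Header(default=None)) -> str:
        if authorization != "Bearer valid-token":
            raise HTTPException(status_code=401)
        return "user"

    @app.get("/platform/things", dependencies=[Depends(require_user)])
    def list_things():
        return {"ok": True}

    client = TestClient(app)

    def test_rejects_unauthenticated_requests():
        # The linter proves the guard exists; this proves it actually blocks.
        assert client.get("/platform/things").status_code == 401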
Think of it like: linters are compile-time checks, tests are runtime checks. TypeScript catches `string + number` at compile time; you don't write a test for that.
I’ve sent Claude back to look at the transcript file from before compaction. It was pretty bad at it but did eventually recover the prompt and solution from the jsonl file.
I did not know it was SQLite, thx for noting. That gives me the idea to make an MCP server, Skill, or classical script which can slurp those and make a PROMPTS.md, or answer other questions via SQL. Will try that this week.
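Something like this minimal sketch, assuming the history lives in a SQLite file with a messages(role, content) table; both the path and the schema are guesses on my part, so check the real layout with sqlite3's .schema first:

    import sqlite3
    from pathlib import Path

    db_path = Path.home() / ".claude" / "history.db"  # hypothetical location
    con = sqlite3.connect(db_path)

    # Hypothetical schema: adjust table and column names to the real one.
    rows = con.execute(
        "SELECT content FROM messages WHERE role = 'user' ORDER BY rowid"
    ).fetchall()

    with open("PROMPTS.md", "w") as f:
        for (content,) in rows:
            f.write(f"- {content.strip()}\n")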
So much this! Nondeterminism in LLMs does not mean you simply "reroll" your prompt and hope for the best; the incremental reprompting is what helps.
I think 25% is a low estimate. Using a proper programming editor alone could realistically offer 2x or more productivity over a basic text editor, and there have definitely been programmers who stayed with basic editors.
And I have seen first hand programming teams where there was clearly more than a 25% difference: some could produce a lot of working code, and some could barely code at all.
I think it would be quite fair to say that, between tools and individual skill, there could easily be a 5x speed difference between the slower and faster programmers, maybe more. Granted, LLMs are even faster, but a potential 5x speedup was no slouch either.
What I've seen is that the productive developers are the ones who understand what problem they are solving. They either take the time to think it through or just have a seemingly uncanny ability to see right to the heart of the problem. No false starts, no playing with different implementations. They write the code, it's efficient, and it works.
The slow developers have false starts, have to rework their code as they discover edge cases they didn't think about, or get all the way through and realize they've solved the wrong problem altogether or that it's too slow at production scale.