I should probably confess that as someone who lives in an area with a lot of construction work, I'm also very vulnerable to "prompt injection" when there's a person standing in the middle of the road holding a sign telling me to change course.
I once encountered an intersection with a big "NO ENTRY" sign on the other side. I turned, but Google Maps wouldn't give me another route, so I did a U-turn and came back to it from the side. Which meant I was close enough to read the small text underneath that said "vehicles under 10 tons excepted". I don't think I've ever been so angry at a road sign.
By my work there is a nice clean sign at the main intersection that reads "NO RIGHT ON RED", with a separate, crusty-looking sign below it that reads "4 to 5 PM" in a much smaller font. Of course the stark difference between the signs means everyone just reads the shiny top sign and waits for the green at all times. I keep wanting to modify the sign to highlight the time.
I came across one in Italy that was meant to prevent you from using a street during school days from X to Y am, and Z to W pm, except on weekends, bank holidays and school holidays.
The idea is well-intentioned, but implementing it by making drivers try to parse arbitrarily complex conditionals while driving is unwise.
There's a sign near my house for a school zone with a reduced speed limit, which used to have conditions similar to the GP's example (though not quite as bad). But they recently attached a yellow light to the top of the sign and changed the condition to "when flashing." That's a much more effective solution.
Obviously. But you can also easily look around at the situation and know when the sign is fake and realize it may be a dangerous situation and disobey. Have you ever seen a green sign that says "Proceed" and just run through a red light because of it? No, you see a construction worker, you see big ass trucks, orange signs and warnings of workers everywhere. If you saw oncoming traffic and people in the road, would you just go because the construction worker flipped his STOP sign around?
Also, I thought we were supposed to make autonomous cars better than humans? What's with the constant excusing of the computer because people suck?
The big one for me is shell scripting and working with the terminal in general. For everything except for the simplest commands, I find myself preferring to ask in plain English, and while imperfect, it has both saved me time and decreased the number of times I've accidentally deleted/broken things, compared to me doing it manually.
If you want to contribute something to the discussion, do that, rather than just saying that you don't like the parent's argument; that's what the downvote button is for.
That's exactly the thing. Claude Code with Opus 4.5 is already significantly better at essentially everything than a large percentage of devs I had the displeasure of working with, including learning when asked to retain a memory. It's still very far from the best devs, but this is the worst it'll ever be, and it has already significantly raised the bar for hiring.
And even if the models themselves for some reason were to never get better than what we have now, we've only scratched the surface of harnesses to make them better.
We know a lot about how to make groups of people achieve things individual members never could, and most of the same techniques work for LLMs, but it takes extra work to figure out how to most efficiently work around limitations such as the lack of integrated long-term memory.
A lot of that work is in its infancy. E.g. I have a project I'm working on now where I'm up to a couple dozen agents, and every day I'm learning more about how to structure them to squeeze the most out of the models.
One learning that feels relevant to the linked article: instead of giving an agent the whole task across a large dataset that would overwhelm its context, it often helps to have a scout agent - which can use Haiku, because it's fine if it's dumb - comb the data for <information relevant to the specific task> and generate a list, and then have the bigger model use that list as a guide.
So the progress we're seeing is not just raw model improvements, but also work like that described in this article: figuring out how to squeeze the best results out of any given model. That work would continue to yield improvements for years even if models somehow stopped improving.
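The scout/worker split described above can be sketched in a few lines. Everything here is hypothetical: `call_model` stands in for whatever LLM client you use, and the model names are illustrative, not real identifiers.

```python
# Sketch of the "cheap scout, expensive worker" pattern: a small model
# combs raw chunks for relevant material, and only the distilled notes
# ever reach the big model's context.
def answer_over_large_dataset(task, chunks, call_model):
    # Cheap pass: one chunk at a time, so no single call sees much data.
    notes = []
    for chunk in chunks:
        note = call_model("small-model",
                          f"Extract anything relevant to: {task}\n\n{chunk}")
        if note.strip():
            notes.append(note.strip())
    # Expensive pass: the big model sees only the digest, not the dataset.
    digest = "\n".join(f"- {n}" for n in notes)
    return call_model("big-model", f"{task}\n\nRelevant notes:\n{digest}")
```

The key property is that the context given to the expensive model scales with the number of relevant findings, not with the size of the dataset.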
If we have DRM with some private key, then I guess your idea is I download the game files and some private key and that allows me to run the game.
If I can send you the private key and the game and it allows you to run the game with no further inputs, then the DRM is trivially broken (even without open source).
If it does some online check, then if the source is open we can easily make a version that bypasses the online check.
If there is some check on the local PC (e.g. the key only works if some hardware ID is set correctly), we can easily find out what it checks, capture that information, package it, and make a new version of the launcher that uses this packaged data instead of the real machine data.
If you use a private key to go online and retrieve more data, having it be open source makes it trivial to capture that data, package it, and write a new version of the launcher that uses that packaged data.
Basically, DRM requires that there is something that is not easy to copy, and it being open source makes it a lot easier to copy.
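The capture-and-replay step above is mechanical once you can read the launcher's source, because the source shows exactly which probes feed the check. A toy illustration (all names hypothetical; real probes would read actual hardware IDs):

```python
import json

# Toy capture-and-replay: the probe functions stand in for whatever
# machine data the open-source launcher reads for its local check.
def capture(probes: dict, path: str) -> None:
    # Run once on the licensed machine; save what the checks see.
    with open(path, "w") as f:
        json.dump({name: fn() for name, fn in probes.items()}, f)

def replayed_probes(path: str) -> dict:
    # On any other machine, swap the real probes for the recording.
    with open(path) as f:
        recorded = json.load(f)
    # Default-arg lambdas so each replayed probe returns its own value.
    return {name: (lambda v=value: v) for name, value in recorded.items()}
```

Since the launcher is open, patching it to call `replayed_probes(...)` instead of the real probes is a small, obvious edit.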
- the game payload is sent to you encrypted using the public key of a secure enclave on your computer
- while the game runs all its memory is symmetrically encrypted (by your own CPU) using a key private to that secure enclave. It is only decrypted in the CPU's cache lines, which are flushed when the core runs anything other than the game (even OS code)
- the secure enclave refuses to switch to the context in which the CPU is allowed to use the decryption key unless a convolution-only (not overwritable with arbitrary values) register inside it has the correct value
- the convolution-only register is written with the "wrong" value, by your own computer's firmware, if you use a bootloader that is not trusted by the DRM system to disallow faking the register (i.e., you need secure boot and a trusted OS)
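The gating logic in those last two bullets can be simulated in a few lines. This is a toy, with all names made up; real systems do this in hardware (TPM PCRs, SGX-style enclaves), not in software you can inspect.

```python
import hashlib, secrets

class ToyEnclave:
    """Simulates measurement-gated key release. Not real security:
    a real enclave keeps the secret in hardware, not in Python."""

    def __init__(self, trusted_measurement: bytes):
        self._secret = secrets.token_bytes(32)   # never leaves the "enclave"
        self._trusted = trusted_measurement
        self.measurement = b"\x00" * 32          # "convolution-only" register

    def extend(self, component: bytes) -> None:
        # PCR-style extend: new = H(old || component). Firmware can only
        # fold values in; it cannot set the register to an arbitrary value.
        self.measurement = hashlib.sha256(self.measurement + component).digest()

    def release_key(self) -> bytes:
        # The key is released only if the boot chain measured out to the
        # expected value; a patched bootloader produces a different hash.
        if self.measurement != self._trusted:
            raise PermissionError("untrusted boot chain")
        return self._secret

# Usage sketch:
trusted = hashlib.sha256(b"\x00" * 32 + b"trusted-bootloader").digest()

good = ToyEnclave(trusted)
good.extend(b"trusted-bootloader")
key = good.release_key()            # succeeds: measurement matches

bad = ToyEnclave(trusted)
bad.extend(b"patched-bootloader")   # "wrong" value folded in
# bad.release_key() -> PermissionError
```

The extend-only register is the crux: because you can only hash new values into it, there is no sequence of writes that forges the trusted measurement without actually running the trusted bootloader.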
That doesn't seem to fit in any of your models. There's no online check, you can't send someone else the key because it's held in hostile-to-you hardware, you can't bypass the local-PC check because it's entirely opaque to you (even the contents of RAM are encrypted). You can crack into a CPU itself I guess?
I don't think the mechanism of the DRM being open source helps with the copying AT ALL in this design.
This design is, by the way, quite realistic: most modern CPUs support MK-TME (multi-key total memory encryption) and all Windows 11 PCs have a TPM. Companies just haven't gotten there yet.
I don't know about how secure enclaves work, so this may be a solution I'm not aware of. Thank you for explaining!
So I guess the whole game software, or at least a significant part, is loaded encrypted and runs encrypted. It's on the user's hardware but the user can't access it.
The only thing I can think of: You say the game payload is encrypted using the public key of a secure enclave. This means the open source game launcher has to pass the public key to the server doing the encryption. Could you not supply a fake public key that goes to a virtual secure enclave? I guess the public key could be signed by intel or something, is that something that happens on current TPMs?
Would it even be possible to do this if the program had to run under Proton/Wine? The original subject here is the launcher running on Linux.
I do wonder about the use of an open source launcher at this point though. As someone who prefers open source software, the idea of encrypted software running on my PC makes me uncomfortable, more so than just closed source software.
The public key is in fact signed by Intel and uniquely serialized to the TPM.
If the game manufacturer requires TPM register values that match Windows, it will not run under Proton/Wine (or a Windows VM). If they allow TPM register values for Linux it will run under Linux too.
I remember reading an essay comparing one's personality to a polyhedral die, which rolls somewhat during our childhood and adolescence, and then mostly settles, but which can be re-rolled in some cases by using psychedelics. I don't have any direct experience with that, and definitely am not in a position to give advice, but just wondering whether we have a potential for plasticity that should be researched further, and that possibly AI can help us gain insights into how things might be.
Would be nice if there was an escape hatch here. Definitely better than the depressing thought I had, which is - to put in AI/tech terminology - that I'm already past my pre-training window (childhood / period of high neuroplasticity) and it's too late for me to fix my low prompt adherence (ability to set up rules for myself and stick to them, not necessarily via a Markdown file).
But that's what I mean. I'm pretty much clinically incapable of intentionally forming and maintaining habits. And I have a sinking feeling that it's something you either win or lose at in the genetic lottery at time of conception, or at best something you can develop in early life. That's what I meant by "being past my pre-training phase and being stuck with poor prompt adherence".
I used to be like you but a couple of years ago something clicked and I was able to build a bunch of extremely life changing habits - it took a long while but looking back I'm like a different person.
I couldn't really say what led to this change though, it wasn't like this "one weird trick" or something.
That being said, I think "The Tao of Pooh" is a great self-help book.
I can relate. It's definitely possible, but you have to really want it, and it takes a lot of work.
You need cybernetics (as in the feedback loop, the habit that monitors the process of adding habits). Meditate and/or journal. Therapy is also great. There are tracking apps that may help. Some folks really like habitica/habit rpg.
You also need operant conditioning: you need a stimulus/trigger, and you need a reward. Could be as simple as letting yourself have a piece of candy.
Anything that enhances neuroplasticity helps: exercise, learning, eat/sleep right, novelty, adhd meds if that's something you need, psychedelics can help if used carefully.
I'm hardly any good at it myself, but I've made some progress.
Right. I know about all these things (but thanks for listing them!) as I've been struggling with it for nearly two decades, with little progress to show.
I keep gravitating to the term, "prompt adherence", because it feels like it describes the root meta-problem I have: I can set up a system, but I can't seem to get myself to follow it for more than a few days - including especially a system to set up and maintain systems. I feel that if I could crack that, set up this "habit that monitors the process of adding habits" and actually stick to it long-term, I could brute-force my way out of every other problem.
If it's any help, one of the statements that stuck with me the most about "doing the thing" is from Amy Hoy:
> You know perfectly well how to achieve things without motivation.[1]
I'll also note that I'm a firm believer in removing the mental load of fake desires: If you think you want the result, but you don't actually want to do the process to get to the result, you should free yourself and stop assuming you want the result at all. Forcing that separation frees up energy and mental space for moving towards the few things you want enough.
> I keep gravitating to the term, "prompt adherence", because it feels like it describes the root meta-problem I have: I can set up a system, but I can't seem to get myself to follow it for more than a few days - including especially a system to set up and maintain systems. I feel that if I could crack that, set up this "habit that monitors the process of adding habits" and actually stick to it long-term, I could brute-force my way out of every other problem.
For what it’s worth, I’ve fallen into the trap of building an “ideal” system that I don’t use. Whether that’s a personal knowledge DB, automations for tracking habits, etc.
The thing I’ve learned is that a new habit should require really, really minimal maintenance and minimal new skill sets beyond the actual habit. Start with pen and paper, and make small optimizations over time. Only once you have ingrained the habit of doing the thing should you worry about optimizing it.
I thought the same thing about myself until I read Tiny Habits by BJ Fogg. It changed my mental model for what habits really are and how to engineer habitual change. I immediately started flossing and haven't quit in the three years since reading. It's very worth reading because there are concrete, research-backed frameworks for rewiring habits.
The brain remains plastic for life, and if you're insane about it, there are entire classes of drugs that induce BDNF production in various parts of the brain.
They can if given write access to "SOUL.md" (or "AGENT.md" or ".cursor" or whatever).
It's actually one of the "secret tricks" from last year that seems to have been forgotten now that people can "afford"[0] running dozens of agents in parallel. Before everyone's focus shifted from single-agent performance to orchestration, one power move was to allow and encourage the agent to edit its own prompt/guidelines file during the agentic session, so that over time and many sessions, the prompt becomes tuned to both the LLM's idiosyncrasies and your own expectations. This was in addition to having the agent maintain a TODO list and a "memory" file, both of which eventually became standard parts of agentic runtimes.
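Concretely, the standing instruction might look something like this in the guidelines file (the wording and section names are illustrative, not from any particular tool):

```markdown
## Self-maintenance

- When a correction from the user reveals a wrong assumption, append a
  one-line rule to the "Lessons" section below so future sessions avoid it.
- Keep this file short; merge or prune stale rules rather than letting
  them accumulate.

## Lessons

- (agent appends entries here)
```

The pruning rule matters: without it, the file grows until it crowds out the context it was meant to save.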
If cooling is such an important factor compared to everything else, I would assume we should see data centers in Antarctica long before we see them on the Moon.
Something tells me that Musk isn't the sort of person who'd ever be satisfied. It's easier for me to imagine him like Mr. House from Fallout, trying to control everything over centuries.
This is true of every billionaire who is still actively trying to get more money. If you're not satisfied at that point, there's no number where you will be.
Google has made it clear that Genie doesn't maintain an explicit 3D scene representation, so I don't think hooking in "assists" like that is on the table. Even if it were, the AI layer would still have to infer things like object weight, density, friction and linkages correctly. Garbage in, garbage out.
Google could try to build an actual 3D scene with AI using meshes or metaballs or something. That would allow for more persistence, but I expect it makes the AI more brittle and limited, and, because it doesn't really understand the rules for the 3D meshes it created, it doesn't know how to interact with them. It can only produce fluffy-mushy dream images.