In this case the original title "ClawdBot Skills ganked all my crypto" was both linkbait and misleading, because (unless I missed it), the article describes no actual such incident.
I have not been following this whole thing closely, but this is where my mind went as soon as I heard there was some overlap in the popularity of this new un-sandboxed agent and people who are into crypto. It's like if everyone who is into buying physical gold started doing a Tiktok challenge to post pictures of their houses and leave their front doors unlocked.
People say the reason nigerian prince scammers use such ridiculous story, or bank phishing has so many typos, is to pre-filter dumb and gullible people so the scammers don't waste time on targets that won't get scammed in the end.
All these AI "hacks" seem to be based on the same principle.
To your point, from the article: "To me, giving a Claude skill all your credentials, and access to everything important to you, and then managing it all via Telegram seems ludicrous, but who am I to judge."
Watching folks speed-run this whole thing is kind of funny from the outside.
I wonder if anyone with a correct mental model of how LLM agents work (i.e, does not conceptualize them as intelligent entities) has actually granted them any permissions for their own life... personally, I couldn't imagine doing so.
Let alone crypto, the risk of reputational loss for actions performed on my behalf (even just spamming personal or professional contacts) is just too high.
I mean… If you have a mental model of LLM agents as intelligent entities, why are you granting them credentials? How many intelligent entities have you shared your Coinbase login with?
The conceptual problem is that there is a huge intersection between the set of "things the agent needs to be able to do in order to be useful" and "things that are potentially dangerous."
I installed it on a spare computer, physically separated. My bigger concern is giving it access to accounts online, without those however it is not very cool.
> I'd like to share a revelation that I've had during my time here. It came to me when I tried to classify your species. I realized that you're not actually mammals. Every mammal on this planet instinctively develops a natural equilibrium with the surrounding environment, but you humans do not. You move to an area, and you multiply, and multiply, until every natural resource is consumed. The only way you can survive is to spread to another area. There is another organism on this planet that follows the same pattern. Do you know what it is? A virus. Human beings are a disease, a cancer of this planet, you are a plague, and we are the cure.
Viruses do not multiply endlessly. Most viruses exist in stable ecological cycles.
Most viruses are beneficial to life. We complain about the few (and tiny minority of viruses) that infect humans and we do so from a selfish perspective, but forget about all the other that make life and evolution possible.
As a matter of fact evolution favors reduced lethality in many cases because wiping out hosts is bad for viral survival.
The bit about mammals is wildly off base too. Boom and bust dynamics are built into most animal populations (i.e. red tailed deer in the northeastern US). Pretty much the only examples I can think of that don't experience those cycles live in very isolated environments like caves or have very long lifespans and large parental investment but even then the dynamic is only dampened, not eliminated entirely.
no, i remembered it being a quote from some famous scientist, and googling a bit now I see it was stephen hawking:
I think computer viruses should count as life ... I think it says something about human nature that the only form of life we have created so far is purely destructive. We've created life in our own image.
Interesting that he would consider software a new life form. I think our organizations are really the higher life form above Apex humans.
When we have computer systems acting as corporation owners, and we begin to thrive in working for those corporations… That’s really going to change the picture.
perhaps, though also "humans are the plague" is a popular trope in science fiction. e.g. this one is from pratchett, in a conversation between rats in "the amazing maurice and his educated rodents":
You will have worked out that there is a race in this world which steals and kills and spreads disease
and despoils what it cannot use, said the voice of Spider.
'Yes,' said Dangerous Beans. 'That's easy. It's called humanity.'
AI has developed this entire culture of people who are "into tech" but seem to not understand how a computer works in a meaningful way. At the very least you'd think they'd ask a chatbot if what they're doing is a bad idea!
I have a separate removable SSD I can boot from to work with Claude in a dedicated environment. It is nice being able to offload environment set up and what not to the agent. That environment has wifi credentials for an isolated LAN. I am much more permissive of Claude on that system. I even automatically allow it WebSearch, but not WebFetch (much larger injection surface). It still cannot do anything requiring sudo.
They are not. Many people are doing this; I don't think there's enough data to say "most," but there's at least anecdotal discussions of people buying Mac minis for the purpose. I know someone who's running it on a spare Mac mini (but it has Internet access and some credentials, so...).
I'd call it "suspicious" that this latest idiocy came out of nowhere and got pushed so hard to normies, when results like this are 100% predictable... if it wasn't also consistent with how the AI industry itself operates.
One could reasonably ask: out of the hundreds (thousands?) of similar "personal AI assistant" tools out there, why did this specific one blow up so dramatically and in such a short period of time? https://www.star-history.com/#openclaw/openclaw&type=date&le...
But to be clear, I'm saying I don't think this is especially suspicious, because actual AI companies are releasing products in exactly the same way, with warning labels that they know users will ignore / aren't capable of assessing in the first place.
GitHub stars are not a reliable metric[1]. Neither is engagement on social media, which is ridden with bots. It would be safe to assume that a project promoting bots is also using them to appear popular.
This whole thing is a classic pump and dump scheme, which this technology has made easier and more accessible than ever. I wouldn't be surprised if the malware authors are the same people behind these projects.
It really is a huge bummer that the most important new technologies of this era have such a film of slime on them. Crypto, AI, whatever comes next, it's just no longer an era in which we can expect innovation to make our lives better. It enables grifters and scammers more than anyone else.
Like I say, the tech is cool but they are doomed to fail (partially because of grift) [although in context of crypto stablecoins/gold (paxos) is the one thing I liked and it did go great for me in terms of gold]
I hope it doesn't count as promotion but I had literally written a blog post about it and made an account literally named justforhn on mataroa when someone was discussing crypto with me in here or something
Maybe its time for me to write part II: Most AI is doomed to fall, the tech is cool though.
I guess I can write it but I already write like this in HN. The procastination of writing specifically in a blog is something which hits me.
Is it just me or is it someone else too? Because on HN I can literally write like novels (or I may have genuinely written enough characters of a novel here, I might have to test it or something lol, got a cool idea right now to measure how many novels a person has written from just their username, time to code it)
Yes, grifters latching onto the newest technology to sell snake oil is a brand new phenomenon and definitely not literally a fundamental part of new technology.
This was inevitable, better now than later when the damage is less widespread. Now clawdbot (or whatever they decide to call themselves) will have to respond with better security safety nets. Individually will always naively download whatever is on the internet. Platforms needs to safeguard against that.
Remember the early days of Windows? yea it's gonna happen again with AI.
> I don’t know how many people are involved in managing the ClawHub registry, but there is no evidence that the skills listed there are scanned by any security tooling. Many of the payloads we found were visible in plain text in the first paragraph of the SKILL.md file.
I shouldn't still be shocked by the incompetence and/or negligence of these people, and yet I am.
Even outside skills, prompt-injection is still unsolvable and the agents need credentials to do anything useful so these things are basically impossible to secure.
I can understand the thought process, although I do not agree with it, of using Clawdbot/Openclaw. I do not understand the thought process of downloading random human-readable instructions or "skills" (especially those pertaining to the manipulation of crypocurrency) and giving it to something in charge of your system without at least reading them first.
I've heard people granting access to their production servers to this thing. Apparently you can ask it to check logs to find solutions to some errors or whatever. Gotta be a complete moron to do that.
I've only installed it on a fresh VM and the first impression was underwhelming. Maybe there is some magic I can't see.
I think we all knew this would happen quickly. Clearly there's a demand for personal AI agents - does anyone have thoughts on what it would take to make a more secure one? Would current services like email need to be redesigned to accommodate AI agents?
* Clear labeling of action types (read/get vs write/post)
* A better way of describing what an agent is potentially about to do (based purely on the functions the agent is about to call)
* More occurrences of AI agents hurting more than helping in the current ecosystem
Agreed. This is a standard supply chain attack that has little to do with AI except that it is written in the 'english-as-a-scripting-language' that LLMs execute.
Every repository is vulnerable to this kind of attack, and pip/npm have been attacked in many times in similar ways.
Ok I ask chat GPT sometimes for advice in health / Fitness and also finance. Not like where to put my money but for general Information how stuff works what would apply here and there. The issue is already that OpenAI knows a lot of me. And ChatGPT itself when asked what he things I am etc draws a pretty clear picture. But I stay away from oversharing specific things. That is mainly my income and other super detailed data. When I ask I try to formulate it to use simple numbers and examples. Works for me. When working with coding agents I’m very skeptical to whitelist stuff. It takes quite the while before I allow a generic command to be executed outside of a sandbox. But to install a random skill to help with Finance Automation… can’t belief it. Under what stone do you have to live to trust your money be handed by an agent and then also in connection with a random skill?
You have "memory" activated in your settings. It is recording information about you and using it in future conversations. Have a look at settings > personalization
What does this matter? Even if I disable it I send enough of data. The point I tried to make was that it baffles me that others just trust theses tools. I’m aware that I send data to OpenAI. I know that chatGPT has a memory feature. But I’m not so naive to think that just because I disabled this magic checkbox the other side might not continue to collect and store data.
Seems like essentially the same threat vector as with NPM.
Not quite related: I never heard of clawdbot before, so, I guess TIL that's the bot my website keeps getting requests that are obviously malicious from.
This is funny, I was discussing moltbook with Claude and it told me there's already a crypto. I thought that's pretty funny, I might want to get some, but can't be arsed to figure it out.
"Do you think I could just give molt a BTC wallet with a bit of funds and tell it to figure out how to buy some?"
-"Yes, but it wouldn't be long before you get pwned."
... Six hours later, this pops on the front page :)
Well no, that's really not related to the issue at all.
This is a bog-standard supply chain attack against their skills repository. It's not an LLM-specific attack, and nearly every repository (pip, npm, etc) has been subject to similar malware.
I only heard about it this week. Then saw a former colleague post about it yesterday. Feels like its only just now breaking into mainstream tech awareness, I'm sure most of my colleagues haven't heard of it.
In this case the original title "ClawdBot Skills ganked all my crypto" was both linkbait and misleading, because (unless I missed it), the article describes no actual such incident.
reply