They're a lot more than "suggestion engines". They can reason with you, show you examples, tell you how to dig deeper and verify what they're saying, etc.
Yeah, for sure, I just don't have many of those. For example, the only use I have for Haiku is for summarizing webpages, or Sonnet for coding something after Opus produces a very detailed plan.
Maybe I should try local models for home automation, Qwen must be great at that.
Ok imagine you went back 30 years and you had a swarm of experts around you who you could ask anything you wanted and they would even do the work for you if you wanted.
Does this mean you'd be incapable of learning anything? Or could you possibly learn way more, because you had the innate desire to learn and understand along with the best tool possible to do it?
It's the same thing here. How you use LLMs is all up to your mindset. Thoroughly review and ask questions about what it did, or why; ask if we could have done it some other way instead. Hell, ask it just the questions you need and do it yourself, or don't use it at all. I was working in C++, for example, with heavy use of mutexes and shared and weak pointers, which I hadn't done before. The LLM fixed a race condition, and I got to ask it precisely what the issue was, and to draw a diagram showing what was happening in that exact scenario before and after.
I feel like I'm learning more because I am doing way more high-level things now, and spending way less time on the stuff I already know or don't care to know (non-fundamentals, like syntax and even libraries/frameworks). For example, I don't really give a fuck about being an expert in Spring Security. I care about how authentication works as a principle, what methods would be best for what, etc., but do I want to spend 3 hours trying to debug the nuances of configuring the Spring Security library for a small project I don't care about?
> Does this mean you'd be incapable of learning anything?
Yes. This strikes me as obvious. People don't have the sort of impulse control you're implying by default; it has to be learned just like anything else. This sort of environment would make you an idiot if it's all you've ever known.
You might as well be saying that you can just explain to children why they should eat their vegetables and rely on them to be rational actors.
I agree with your premise, but this example I strongly disagree with:
> You might as well be saying that you can just explain to children why they should eat their vegetables and rely on them to be rational actors.
YES! Explain to them, and trust them. They might not do exactly as you wish for them, but I'll bet you don't do exactly as you wish for yourself either. The children need your trust and they must learn how to navigate this world by themselves, with parents providing guidance and only taking the hard stance (but still explaining and discussing!) when safety is concerned. Also, lead by example. If you eat vegetables then children are likely to eat them too. The children are not stupid, they just don't have enough experience yet. Which you gain by trying (and failing), not by listening.
You're right, it was a bad example. I also don't eat my vegetables. I was more trying to make the point that most of us are not rational actors either, was just using children as a convenient proxy, unfairly.
I see it as being more personality/interest than impulse control. A curious/interested person would try and get involved and be a part of it, someone uninterested will just say what's the point and get by having the work done for them.
It may very well have stunted my learning. What’s the point of absorbing information when you have a consortium of experts available 24/7?
Saying what you said about it being down to how you use LLMs comes from a privileged position. You likely already know how to code. You likely know how to troubleshoot. Would you develop those same skill sets today starting from zero?
Supposedly because AI has limits and you still have to know what you're doing so you can guide it and do it better.
If that's not true, then what's the problem with not learning the material? Go do something more productive with your time if the personal curiosity isn't good enough. We're in a whole new world.
>Saying what you said about it being down to how you use LLMs comes from a privileged position. You likely already know how to code. You likely know how to troubleshoot. Would you develop those same skill sets today starting from zero?
This is true, and I can't answer that 100% confidently. I imagine I would just be doing more, and more complicated, things and learning higher-level concepts. For example, if right off the bat I could produce a web app, I'd want to deploy it somewhere. So I'd come across things like ssh, nginx, port forwarding, jars, bundles, DNS, authentication, etc. Do this 1000 times, just the way I wrote 1000 different little functions or programs by hand, and you'll no shit absorb a little here and there as issues come up. Or maybe, if what was hard a year ago is easy today, I'd want to do something far more incredibly complex than anything anyone's been able to imagine before, and learn in that struggle.
Programmers in the 90s were far more adept at understanding CPU registers, memory, and all sorts of low-level stuff. Then the abstraction moved up the stack, and then again and again. I think the same thing will happen here.
Also, you can't say I'm in a privileged position for already knowing how to code while at the same time asking what's the point of learning it yourself.
The problem is that the abstraction level moved up so far that we're now programming in the English language, and we're more like managers than programmers. This will only get worse. The next step will be that AIs run entire companies. And BigAI will not allow us to profit from that because they will just run the AI themselves, the current situation was just a stepping stone.
Hmm, no way. I used to see hallucinations like 50% of the time prompting GPT-3.5 for simple functions.
I don't remember the last time I've seen a made-up library or method these days, and I'm definitely using it way more, for more complex stuff. Tool calling changed the game.
Even for work I do almost 100% of my coding by telling Claude what to do. I mean, I break down the tasks and tell it more or less exactly what I want, but I find "rename this thing across these two repos" easier than doing it myself.
Claude Max at $100 is way more usage than I need. And yeah, it's not running all the time; it just has a heartbeat file telling it how to check something and run.
> And it's not that hard to just run it in docker if you're so worried
There is risk of damage to one's local machine and data, as well as reputational risk if it has access to outside services. Imagine your socials filled with hate, à la Microsoft Tay, because it was red-pilled.
Though given the current cultural winds perhaps that could be seen as a positive?
But the intent is to make as much money as possible with zero care for the user's well-being.
I worked at Tinder, for example, and you would think that company, in an ethical world, would be thinking about how to make dating better, how to get people more matches while spending less time on the app. Nope, we literally had projects called "Whale", and the focus was selling absolutely useless and even harmful features that generated money.