I almost entirely agree with you, but the issue is that the information you currently have might not be enough to get the answers you want through pure deduction. So how do you get more information?
I think chatbots are a very clumsy way to get information. Conversations tend to be unfocused until you, the human, take an interest in something more specific and pursue it. You're still doing all the work.
It's also too easy to believe the hype and think it's at least better than talking to another person with more limited knowledge. The fact is that talking has always been a poor medium. It's slow, but a human is still better because they can deduce in ways LLMs never will. Deduction is not mere pattern matching or correlation. Most key insights are the result of walking a long tightrope of deductions. LLMs are best at summarizing and assisting with search when you don't know where to start.
And so we are still better off reading a book containing properly curated knowledge, thinking about it for a while, and then socializing with other humans.
No, I don’t think humans have some magical metaphysical deduction capability that LLMs exclusively lack.
I have had conversations with them, and while they don’t have the exact attentiveness of a human, they get pretty close. Where they do have an advantage is in being an expert in almost any field.
Yes, LLMs have been a very expensive philosophy lesson for many investors. Ancient epistemology debates are now front and center for everyone to see. So-called "formal epistemology" is just empiricism in disguise attempting to borrow the credibility of rationalism and failing miserably.
LLMs are Bayesian inference engines and come with all of its baggage. We know for certain that brains are far better than that, even the brains of other animals and insects.
Ultimately, there's no point in getting a chatbot to produce deceptively expert-sounding words that are guaranteed by design to be lower quality than the books or blogs it learned from. LLMs are at best a search tool for those sources, and investor attitudes now reflect that sanity, with confidence shifting back toward Google's offerings. Agentic AI is also pretty weak, since agents are as functionally limited as any traditionally written computer program while lacking the most crucial property: repeatability.
I find it shocking how many people didn't see this whole thing as a grift from day one. What else was SV going to do during the post-covid economic slump?