LLMs are actually very, very useful for certain things.
The problem isn’t that they lack utility. It’s that they’re constantly being shoehorned into areas where they aren’t useful.
They’re great at surfacing new knowledge in areas where you don’t have a complete picture. You can’t take that knowledge at face value, but a framework you can validate against external sources can be a massive timesaver.
They’re good at summarizing text. They’re good at finding solutions to very narrow and specific coding challenges.
They’re not useful for providing support. They’re not useful for detailing specific technical issues. They are not good friends.
I keep saying that these LLM peddlers are selling us a brain, when at most they deliver the Wernicke’s and Broca’s areas of one.
Sure, those are necessary for a human-like brain, but that’s only 10% of the job done, my guys.
Not even. LLMs don’t really understand what you say, and their output is often nonsensical babble.
You’re right. It’s more like discussing things with an Alzheimer’s-addled brain that’s been coerced into a particular vocabulary.