• Dzso@lemmy.world · 4 hours ago

    I’m not saying humans are infallible at recognizing truth either. That’s why so many of us fall for the untruths that AI tells us. But we have access to many tools that help us evaluate truth. AI is emphatically NOT the right tool for that job. Period.

    • Zozano@aussie.zone · 37 minutes ago

      Right now, the capabilities of LLMs are the worst they’ll ever be. It could literally be tomorrow that someone drops an LLM that is perfectly calibrated to evaluate truth claims. But right now, we’re at least 90% of the way there.

      The reason people fail to recognize the untruths of AI is the same reason people hurt themselves with power tools or use a calculator wrong.

      You don’t blame the tool; you blame the user. LLMs are no different. You can prompt GPT to intentionally give you bad info, or lead it into giving you bad info by posting increasingly deranged statements. If you stay coherent and well-read, and structure your arguments to the best of your ability, the pool of data GPT pulls from narrows enough to be more useful than anything else I know.

      I’m curious: what do you regard as a better tool for evaluating truth?

      Period.

      • Dzso@lemmy.world · 27 minutes ago (edited)

        You don’t understand what an LLM is, or how it works. They do not think, they are not intelligent, they do not evaluate truth. It doesn’t matter how smart you think you are. In fact, thinking you’re so smart that you can get an LLM to tell you the truth is downright dangerous naïveté.

        • Zozano@aussie.zone · 10 minutes ago (edited)

          I do understand what an LLM is. It’s a probabilistic model trained on massive corpora to predict the most likely next token given a context window. I know it’s not sentient, doesn’t “think,” and doesn’t have beliefs. That’s not in dispute.
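          To make “predict the most likely next token” concrete, here’s a minimal sketch of that loop using the Hugging Face transformers library and the small public gpt2 checkpoint; the prompt and the checkpoint are just illustrative choices:

          ```python
          # Minimal next-token prediction: the model scores every vocabulary token
          # as a possible continuation of the context window.
          import torch
          from transformers import AutoModelForCausalLM, AutoTokenizer

          tokenizer = AutoTokenizer.from_pretrained("gpt2")
          model = AutoModelForCausalLM.from_pretrained("gpt2")

          context = "The capital of France is"          # illustrative prompt
          inputs = tokenizer(context, return_tensors="pt")

          with torch.no_grad():
              logits = model(**inputs).logits           # [1, seq_len, vocab_size]

          probs = torch.softmax(logits[0, -1], dim=-1)  # distribution over the next token
          top = torch.topk(probs, k=5)
          for p, idx in zip(top.values, top.indices):
              print(f"{tokenizer.decode(idx)!r}: {p.item():.3f}")
          ```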

          But none of that disqualifies it from being useful in evaluating truth claims. Evaluating truth isn’t about thinking in the human sense; it’s about pattern-matching valid reasoning, sourcing relevant evidence, and identifying contradictions or unsupported claims. LLMs do that very well, especially when prompted properly.

          Your insistence that this is “dangerous naïveté” confuses two very different things: trusting an LLM blindly versus leveraging it with informed oversight. I’m not saying GPT magically knows truth; I’m saying it can be used as a tool in a truth-seeking process, just like search engines, logic textbooks, or scientific journals. None of those are conscious either, yet we use them to get closer to truth.

          You’re worried about misuse, and so am I. But claiming the tool is inherently useless because it lacks consciousness is like saying microscopes can’t discover bacteria because they don’t know what they’re looking at.

          So again: if you believe GPT is inherently incapable of aiding in truth evaluation, the burden’s on you to propose a more effective tool that’s publicly accessible, scalable, and consistent. I’ll wait.

          • Dzso@lemmy.world · 9 minutes ago

            What you’re describing is not an LLM; it’s the tools an LLM is programmed to use.

            • Zozano@aussie.zone · 12 seconds ago

              No, I’m specifically describing what an LLM is. It’s a statistical model trained on token sequences to generate contextually appropriate outputs. That’s not “tools it uses”; that is the model. When I said it pattern-matches reasoning and identifies contradictions, I wasn’t talking about external plug-ins or retrieval tools; I meant the LLM’s own internal learned representation of language, logic, and discourse.

              You’re drawing a false distinction. When GPT flags contradictions, weighs claims, or mirrors structured reasoning, it isn’t outsourcing that to some other tool; it’s doing what it was trained to do. It doesn’t need to understand truth like a human to model the structure of truthful argumentation, especially if the prompt constrains it toward epistemic rigor, as in the sketch below.
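              For instance, here’s a rough sketch of what I mean by constraining a prompt toward epistemic rigor, using the OpenAI Python client; the model name is a placeholder and the system prompt wording is my own assumption, not anything GPT ships with:

              ```python
              # Sketch: a system prompt that pushes the model toward flagging
              # unsupported premises and contradictions instead of just agreeing.
              from openai import OpenAI

              client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

              SYSTEM_PROMPT = (
                  "Evaluate the user's claim. Identify unsupported premises, logical "
                  "fallacies, and internal contradictions. State what evidence would be "
                  "needed to support each premise, and say 'uncertain' when the claim "
                  "cannot be settled from the argument alone."
              )

              claim = "Crime went up after the policy changed, so the policy caused it."

              response = client.chat.completions.create(
                  model="gpt-4o-mini",  # placeholder model name
                  messages=[
                      {"role": "system", "content": SYSTEM_PROMPT},
                      {"role": "user", "content": claim},
                  ],
              )
              print(response.choices[0].message.content)
              ```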

              Now, if you’re talking about things like code execution, search, or retrieval-augmented generation, then sure, those are tools it can use. But none of that was part of my argument. The ability to track coherence, cite counterexamples, or spot logical fallacies is all within the base LLM. That’s just weights and training.

              So unless your point is that LLMs aren’t humans, which is obvious and irrelevant, all you’ve done is attack your own straw man.