• heavydust@sh.itjust.works · 20 hours ago

    Asking the machine to think for you makes you stupid. Incredible.

    And no, you can’t compare that to a calculator or any other program. A calculator will not do the whole reasoning for you.

      • ThePyroPython@lemmy.world · 18 hours ago

        Nope, it’s just a black box’s best guess as to what the reasoning should look like.

        Sort of like in an exam where you give your best guess for an answer, then jot down some “working out” that you think looks sort-of correct, scraping together enough marks to pass.

        Now imagine you’re not just trying to pass one question in one test in one subject, but one question out of millions of possible questions across hundreds of thousands of possible subjects, AND you experience time 5 million times slower than the examiner, AND you had 3 years (in examiner time) to practice your guesswork.

        That’s it. That’s all this AI bullshit is doing. And people are racing to achieve the best monkey typewriter that requires the fewest bananas to work.

        • SpaceNoodle@lemmy.world · 17 hours ago

          Not even that. It’s just a weighted model of what a sentence should look like, with no concept of factual correctness.
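
          Here’s a toy sketch in Python of what “a weighted model of what a sentence should look like” boils down to. The words and weights are entirely made up for illustration (a real model learns them from training data); the point is that the model only ranks candidate next words by probability, and nothing in that process ever checks a fact:

          ```python
          import random

          # Hypothetical learned weights: P(next word | "the sky is").
          # In a real model these come from training; here they're invented.
          next_word_probs = {"blue": 0.55, "clear": 0.20, "falling": 0.15, "green": 0.10}

          # Pick the next word by weighted chance -- plausibility, not truth.
          words, weights = zip(*next_word_probs.items())
          next_word = random.choices(words, weights=weights, k=1)[0]
          print("the sky is", next_word)  # sometimes prints "green"; nothing objects
          ```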

    • JustAnotherKay@lemmy.world · 19 hours ago

      To agree with you in different words, I would argue that you can compare it to a calculator. Without the reasoning, a calculator is basically useless. I can tell you that 1.1(22 * 12 * 3) = 871.2, but it’s impossible to know what that number means or why it’s important from that information alone. An LLM works the same way: I give it an equation (a “prompt”) and it does some math to give me a response, which is useless without context. It doesn’t actually answer the words in the prompt; it does (at best) guesswork based on the “value” of the text.
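
      To make that concrete, here’s a rough sketch of the same point. The variable names and scenario are hypothetical, purely for illustration: the arithmetic produces 871.2 either way, and the meaning lives entirely in the context wrapped around it.

      ```python
      # The bare calculation: correct, but meaningless on its own.
      result = 1.1 * (22 * 12 * 3)
      print(f"{result:.1f}")  # 871.2 -- but 871.2 of *what*?

      # The same arithmetic with (hypothetical, invented) context attached:
      hourly_rate = 22        # dollars per hour
      hours_per_week = 12
      weeks = 3
      tax_multiplier = 1.1    # 10% tax added on top
      invoice_total = tax_multiplier * (hourly_rate * hours_per_week * weeks)
      print(f"Invoice total: ${invoice_total:.2f}")  # now the number means something
      ```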