• Assassassin@lemmy.dbzer0.com · 33 points · 7 days ago

    I’ll be the first to hop in on the AI hate train, but isn’t this just broadly true of all humans? We’re pretty notoriously awful at identifying our own gaps in knowledge and skill. I imagine that the constant confirmation from AI exacerbates the issue, but I don’t think it’s entirely AI’s fault that people are bad at recognizing their shortcomings.

    • kromem@lemmy.world · 7 points · 7 days ago

      The AI has the same tendency, inherited from the broader human tendency in its training data.

      So you get overconfident human + overconfident AI, which creates a feedback loop that ends up even more confidently full of BS than a human alone.

      AI is routinely confidently incorrect. People who don’t realize this, and who don’t question outputs that align with their confirmation biases, are especially likely to end up misled.

    • XLE@piefed.social · 5 points · 7 days ago

      This article is about how AI exacerbates those tendencies. And since there are so few ways to accurately measure how well AI actually works in general, those self-assessments make up a significant portion of AI’s value proposition.

  • ArcaneSlime@lemmy.dbzer0.com · 6 points · 7 days ago

    So do I sometimes, but that’s just ADHD, hatred of banal competition, imposter syndrome, or simply “still learning the thing.”

    Yes, I hate corporate self-evaluations; how could you tell? Fuck it, I’ll just put “I’m the absolute best person to ever walk the earth, mr bossman. Money me please” again, because fuck you for making me do this in the first place.

  • Aria@lemmygrad.ml · 2 points · 5 days ago

    The results of the second study mirrored the first. The monetary incentive did not correct the overestimation bias. The group using AI continued to perform better than the unaided group but persisted in overestimating their scores. The unaided group showed the classic Dunning-Kruger pattern, where the least skilled participants showed the most bias. The AI group again showed a uniform bias, confirming that the technology fundamentally shifts how users perceive their competence.

    So it’s only high performers that are affected, then, no? I also wish the article would mention the average bias from the control group. I know the curve looks different, but it sounds like they’re probably only talking about a single answer’s worth of difference between the groups, and with only ~600 participants that doesn’t seem that significant.
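
    Back-of-envelope: whether a one-answer gap between two groups of ~300 clears statistical significance depends almost entirely on the spread of the scores, which the article doesn’t give. A minimal sketch in Python, with the group sizes, mean biases, and standard deviations all assumed for illustration:

        from scipy import stats

        n = 300  # ~600 participants split evenly between groups (assumed)

        # Hypothetical standard deviations of the self-estimation bias;
        # the paper's actual spreads are unknown.
        for sd in (2.0, 4.0, 8.0):
            t, p = stats.ttest_ind_from_stats(
                mean1=3.0, std1=sd, nobs1=n,  # AI group (assumed mean bias)
                mean2=2.0, std2=sd, nobs2=n,  # control, one answer lower
            )
            print(f"sd={sd}: t={t:.2f}, p={p:.4f}")

    Under these made-up numbers, a one-answer gap is decisive when the spread is tight (sd=2 gives p well under 0.001) but washes out when answers vary a lot (sd=8 gives p≈0.13), so the raw difference alone can’t settle it.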

    The researchers noted that most participants acted as passive recipients of information. They frequently copied and pasted questions into the chat and accepted the AI’s output without significant challenge or verification. Only a small fraction of users treated the AI as a collaborative partner or a tool for double-checking their own logic.

    So then it’s possible that they correctly assessed that they’re worse at the test than the AI, as established earlier in the article. That seems pretty important. I’m sure it’s covered in the actual paper, but I can only access the article.