• koper@feddit.nl · 1 day ago

    Ok, so your point is that people who interact with these AI systems will know they can’t be trusted, and that this will alleviate the negative consequences of their misinformation.

    The problems with that argument are many:

    • The vast majority of people are not AI experts and do, in fact, place a lot of trust in such systems.

    • Even people who do know often have no other choice. You don’t get to talk to a human; it’s the chatbot or nothing. And that’s assuming the AI slop is even labelled as such.

    • Even knowing that the information can be misleading does not help much. If you sell me a bowl of candy and tell me that 10% of the pieces are poisoned, I’m still going to demand non-poisoned candy. That people can no longer rely on accurate information should be unacceptable.

    • Dzso@lemmy.world · 1 day ago

      Your argument is basically “people are stupid,” and I don’t disagree with you. But that’s actually an argument in favor of my point, which is: educate people.

      • koper@feddit.nl · 1 day ago

        That was only my first point. My second and third points explain why education is not going to solve this problem. That’s like poisoning people’s candy and then educating them about it.

        I’ll add that these AI applications only work because people trust their output. If everyone saw them for the cheap party tricks that they are, they wouldn’t be used in the first place.