• FaceDeer@fedia.io · 1 day ago

    The article literally shows how the goals are being set in this case. They’re prompts. The prompts are telling the AI what to do. I quoted one of them.

      • FaceDeer@fedia.io · 1 day ago

        If you read the article (or my comment that quoted the article) you’ll see your assumption is wrong.

        • FiskFisk33@startrek.website · 1 day ago

          Not the article; the commenter before you points at a deeper issue.

          It doesn’t matter if your prompt tells it not to lie when it isn’t actually capable of following that instruction.

          • FaceDeer@fedia.io · 1 day ago

            It is following the instructions it was given. That’s the point. It’s being told “promote this drug”, and so it’s promoting it, exactly as instructed.

            Why do you think the correct behaviour for the AI here must be to be “truthful”? If it were being truthful, that would be an example of it failing to follow its instructions in this case.