• FauxLiving@lemmy.world · +80/-6 · 1 day ago (edited)

    This research is good, valuable and desperately needed. The uproar online is predictable and could possibly help bring attention to the issue of LLM-enabled bots manipulating social media.

    This research isn’t what you should get mad it. It’s pretty common knowledge online that Reddit is dominated by bots. Advertising bots, scam bots, political bots, etc.

    Intelligence services of nation states, and political actors seeking power, are all running these kinds of influence operations on social media, using bots to dominate the conversations on the topics they care about. This is well understood in social media circles. Go to any politically charged thread on international affairs and you will notice that something seems off. It’s hard to say exactly what, but if you’ve been active online for a long time you can recognize that something is wrong.

    We’ve seen how effective this manipulation is at changing public opinion (see: Cambridge Analytica, or, if you don’t know what that is, the documentary ‘The Great Hack’), so it is only natural to wonder how much more effective online manipulation has become now that bad actors can use LLMs.

    This study is by a group of scientists trying to figure that out. The only difference is that they’re publishing their findings in order to inform the public, whereas Russia isn’t doing us the same favor.

    Naturally, it is in the interest of everyone using LLMs to manipulate the online conversation that this kind of research is never done. Having this information public could lead to reforms, regulations and effective counter-strategies. It is no surprise to see a bunch of social media ‘users’ creating a huge uproar.


    Those of you who don’t work in tech may not understand just how easy and cheap it is to set something like this up. For a few million dollars and a small staff you could essentially dominate a large multi-million-subscriber subreddit with whatever opinion you wanted to push. Bots generate variations of the opinion you want to push, the bot accounts (guided by humans) downvote everyone else out of the conversation, and, in addition, moderation power can be seized, stolen or bought to further control the conversation. (The variation step is sketched in code below.)

    Or wholly fabricated subreddits can be created. A few months before the US election, several new subreddits were created and catapulted to popularity despite being little more than bots reposting news. Those subreddits now sit high in the /all and /popular feeds, even though their moderators and a huge portion of their users are bots.
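
    To make concrete how low the technical bar is, here’s a minimal sketch of that ‘generate variations of an opinion’ step. It assumes a local Ollama install serving a small open model; the endpoint is real, but the model name, talking point, personas and prompt are made up for illustration:

    ```python
    # Illustrative only: generate pseudonymous restatements of one talking point.
    # Assumes a local Ollama install; the talking point, personas, and prompt
    # are hypothetical, and "llama3" stands in for any small instruct model.
    import requests

    TALKING_POINT = "Policy X has clearly failed and most locals oppose it."
    PERSONAS = ["a retired teacher", "a college student", "a small-business owner"]

    def variant(opinion: str, persona: str) -> str:
        """Ask the local model to restate one opinion in a distinct voice."""
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={
                "model": "llama3",  # assumption: any small instruct-tuned model
                "prompt": (
                    f"Rewrite this opinion as a casual Reddit comment from "
                    f"{persona}, keeping the same conclusion:\n{opinion}"
                ),
                "stream": False,
            },
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"]

    if __name__ == "__main__":
        for p in PERSONAS:
            print(variant(TALKING_POINT, p), "\n---")
    ```

    Scale that loop across a few hundred accounts and a scheduler and you have the core of an influence operation; everything else is logistics.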

    We desperately need this kind of study to keep from drowning in a sea of fake people who will tirelessly work to convince you of all manner of nonsense.

    • andros_rex@lemmy.world · +11/-1 · 1 day ago

      Regardless of any value you might see from the research, it was not conducted ethically. Allowing unethical research to be published encourages further unethical research.

      This flat out should not have passed review. There should be consequences.

      • deutros@lemmy.world · +2/-1 · 22 hours ago

        If the need were great enough and the negative impact low enough, it could pass review. A lack of informed consent can be justified given sufficient need, and if obtaining consent would compromise the science. The burden is high but not impossible to overcome. This is an area with huge societal impact, so I would consider an ethical case to be plausible.

    • T156@lemmy.world · +10/-1 · 24 hours ago

      Conversely, while the research is good in theory, the data isn’t that reliable.

      The subreddit has rules requiring users to engage with everything as though it were written in good faith by real people. Users aren’t likely to point out a bot when the rules explicitly forbid them from doing so.

      There wasn’t much of a control, either. The researchers compared themselves to the bots, so it could easily be that they themselves were less convincing, since they were acting outside their area of expertise.

      And that’s even before the whole ethical mess that is experimenting on people without their consent. Post-hoc consent is not informed consent, and consent is the crux of ethical human experimentation.

      • thanksforallthefish@literature.cafe · +2 · 13 hours ago

        Users aren’t likely to point out a bot when the rules explicitly forbid them from doing so.

        In fact, one user commented that mods deleted his comment calling out one of the bots as a bot, for breaking that rule.

  • MonkderVierte@lemmy.ml · +29 · 2 days ago (edited)

    When Reddit rebranded itself as “the heart of the internet” a couple of years ago, the slogan was meant to evoke the site’s organic character. In an age of social media dominated by algorithms, Reddit took pride in being curated by a community that expressed its feelings in the form of upvotes and downvotes—in other words, being shaped by actual people.

    Not since the APIcalypse at least.

    Aside from that, this is just reheated news (for clicks, I assume) from a week or two ago.

    • ClamDrinker@lemmy.world · +7 · 1 day ago (edited)

      One likely reason the backlash has been so strong is because, on a platform as close-knit as Reddit, betrayal cuts deep.

      Another laughable quote after the APIcalypse, at least for the people who remained on Reddit after being totally okay with being betrayed.

  • conicalscientist@lemmy.world · +35/-2 · 2 days ago

    This is probably the most ethical you’ll ever see it. There are definitely organizations committing far worse experiments.

    Over the years I’ve noticed replies that are far too on the nose, probing just the right pressure points, as if they dropped exactly the right breadcrumbs for me to respond to. I’ve learned to disengage at that point. Either they scrolled through my profile, or, as we now know, it’s a literal psy-op bot. Even in the first case it’s not worth engaging with someone more invested than I am myself.

    • skisnow@lemmy.ca · +15 · 2 days ago

      Yeah I was thinking exactly this.

      It’s easy to point to reasons why this study was unethical, but the ugly truth is that bad actors all over the world are performing trials exactly like this all the time - do we really want the only people who know how this kind of manipulation works to be state psyop agencies, SEO bros, and astroturfing agencies working for oil/arms/religion lobbyists?

      Seems like it’s much better long term to have all these tricks out in the open so we know what we’re dealing with, because they’re happening whether it gets published or not.

    • FauxLiving@lemmy.world · +2 · 1 day ago

      Over the years I’ve noticed replies that are far too on the nose, probing just the right pressure points, as if they dropped exactly the right breadcrumbs for me to respond to. I’ve learned to disengage at that point. Either they scrolled through my profile, or, as we now know, it’s a literal psy-op bot. Even in the first case it’s not worth engaging with someone more invested than I am myself.

      You put it better than I could. I’ve noticed this too.

      I used to just disengage. Now, when I find myself talking to someone like this, I use my own local LLM to generate replies just to waste their time. I do this by prompting the LLM to take a chastising tone, point out their fallacies, and lecture them on good-faith participation in online conversations.

      It is horrifying to see how many bots you catch like this. Either it is bots, or there are suddenly a lot more people who will go 10-20 multi-paragraph replies deep into a conversation despite talking to something that is obviously (to a trained human) just generating comments.

        • FauxLiving@lemmy.world · +3 · 1 day ago (edited)

          I think the simplest way to explain it is that the average person isn’t very skilled at rhetoric; they argue inelegantly. After a long time talking online, you get used to talking with people and seeing how they respond to different rhetorical strategies.

          In these bot-infested social spaces there seem to be a large number of commenters who argue way too well while also deploying a huge number of fallacies. Any individual case could be explained by a person simply choosing to argue in bad faith, but in these spaces there are too many commenters deploying these tactics compared to the baseline I’ve established over decades of talking to people online.

          In addition, what you see in some of these spaces are commenters with a very structured way of arguing, as if they’ve picked your comment apart into bullet points and then selected arguments against each point that are technically on topic but subtly misleading.

          I’ll admit that this is all very subjective. It’s entirely based on my perception and noticing patterns that may or may not exist. This is exactly why we need research on the topic, like in the OP, so that we can create effective and objective metrics for tracking this.

          For example, if you could somehow measure the number of good-faith comments versus fallacy-laden comments in a given community, there would likely be a normal ratio (say, 10 people who are bad at arguing for every 1 person who is good at it, with 10% of those skilled arguers commenting in bad faith and using fallacies), and you could compare that ratio across online topics to discover the ones that appear to be botted.

          That way you could objectively say that, on the topic of gun control in one specific subreddit, we’re seeing an elevated ratio of bad-faith to good-faith commenters, and therefore that this topic/subreddit is being actively botted with LLMs. That information could be used to deploy anti-bot countermeasures (captchas, for example).
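
          A toy sketch of that metric, just to show the shape of it. The keyword classifier below is a placeholder for a real trained model, and the baseline is an assumed number; both would have to come from actual research, which is the point:

          ```python
          # Toy sketch of the ratio idea: score comments as good faith vs
          # fallacy-laden, then compare a community's ratio to a baseline.
          # The classifier and BASELINE are hypothetical placeholders.
          from collections import Counter

          FALLACY_MARKERS = ("everyone knows", "so you're saying", "only an idiot")
          BASELINE = 0.10  # assumed organic rate of bad-faith comments

          def classify(comment: str) -> str:
              """Stand-in for a real bad-faith/fallacy classifier."""
              lowered = comment.lower()
              return "bad_faith" if any(m in lowered for m in FALLACY_MARKERS) else "good_faith"

          def bad_faith_ratio(comments: list[str]) -> float:
              """Fraction of comments scored as bad faith."""
              counts = Counter(classify(c) for c in comments)
              return counts["bad_faith"] / len(comments) if comments else 0.0

          def looks_botted(comments: list[str], factor: float = 3.0) -> bool:
              """Flag a topic whose ratio is far above the organic baseline."""
              return bad_faith_ratio(comments) > BASELINE * factor
          ```

          The value is in the comparison across communities, not in any single comment’s score.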

          • ibelieveinthehousehippo@lemmy.ca · +3 · 23 hours ago

            Thanks for replying

            Do you think response time could also indicate that a user is a bot? I’ve had an interaction that I chalked up to someone using AI, but looking back now I’m questioning whether there was much human involvement at all, just due to how quickly the detailed replies were coming in…

            • FauxLiving@lemmy.world · +1 · 18 hours ago

              It depends, but it’d be really hard to tell. I type around 90-100 WPM, so my comment only took me a few minutes.

              If they’re responding within a second or two with a giant wall of text it could be a bot, but it may just be a person who’s staring at the notification screen waiting to reply. It’s hard to say.

  • flango@lemmy.eco.br · +23 · 2 days ago

    […] I read through dozens of the AI comments, and although they weren’t all brilliant, most of them seemed reasonable and genuine enough. They made a lot of good points, and I found myself nodding along more than once. As the Zurich researchers warn, without more robust detection tools, AI bots might “seamlessly blend into online communities”—that is, assuming they haven’t already.

  • TheObviousSolution@lemm.ee · +29 · 2 days ago

    The reason this is “The Worst Internet-Research Ethics Violation” is that it has exposed what Cambridge Analytica’s successors already realized and are actively exploiting. Just a few months ago it was literally Meta itself running AI accounts trying to pass as normal users, and not an f-ing peep. Why do people think they, the ones who enabled Cambridge Analytica, were trying this shit to begin with? The only difference now is that everyone doing it knows to do it as an “unaffiliated” anonymous third party.

    • tauren@lemm.ee · +1/-2 · 2 days ago

      Just a few months ago it was literally Meta itself…

      Well, it’s Meta. When it comes to science and academic research, they have rather strict rules and committees to ensure that an experiment is ethical.

      • thanksforallthefish@literature.cafe · +7/-1 · 2 days ago

        You may wish to reword. The unspecified “they” reads like you think Meta have strict ethical rules. Lol.

        Meta have no ethics whatsoever. And yes, I assume you meant that universities have strict rules; the approval of this study makes even that questionable, however.

      • FarceOfWill@infosec.pub · +5 · 2 days ago

        The headline is that they advertised beauty products to girls after they detected them deleting a selfie. No ethics or morals at all

    • FauxLiving@lemmy.world · +5 · 1 day ago

      One of the Twitter leaks showed a user database that effectively had more users than there were people on earth with access to the Internet.

      Before Elon bought the company he was trashing them on social media for being mostly bots. He’s obviously stopped that now that he’s been forced to buy it, but the fact remains that Twitter (and, by extension, other social spaces) is mostly bots.

  • perestroika@lemm.ee · +13 · 2 days ago (edited)

    The University of Zurich’s ethics board—which can offer researchers advice but, according to the university, lacks the power to reject studies that fall short of its standards—told the researchers before they began posting that “the participants should be informed as much as possible,” according to the university statement I received. But the researchers seem to believe that doing so would have ruined the experiment. “To ethically test LLMs’ persuasive power in realistic scenarios, an unaware setting was necessary,” because it more realistically mimics how people would respond to unidentified bad actors in real-world settings, the researchers wrote in one of their Reddit comments.

    This seems to be the kind of a situation where, if the researchers truly believe their study is necessary, they have to:

    • accept that negative publicity will result
    • accept that people may stop cooperating with them on this work
    • accept that their reputation will suffer as a result
    • ensure that they won’t do anything illegal

    After that, if they still feel their study is necessary, maybe they should run it and publish the results.

    If, then, some eager redditors start sending death threats, that’s unfortunate. I would catalogue them, but not report them anywhere unless something actually happens.

    As for the question of whether a tailor-made response considering someone’s background can sway opinions better - that’s been obvious through ages of diplomacy. (If you approach an influential person with a weighty proposal, it has always been worthwhile to know their background, think of several ways of how they might perceive the proposal, and advance your explanation in a way that relates better with their viewpoint.)

    AI bots which take into consideration a person’s background will - if implemented right - indeed be more powerful at swaying opinions.

    As to whether secrecy was really needed - the article points to other studies which apparently managed to prove the persuasive capability of AI bots without deception and secrecy. So maybe it wasn’t needed after all.

    • Djinn_Indigo@lemm.ee · +2/-1 · 1 day ago

      But those other studies didn’t make the news, did they? The thing about scientists is that they aren’t just scientists, and the impact of their work goes beyond the papers they publish. If doing something ‘unethical’ is what it takes to get people to wake up, then maybe the publication status is a lesser concern.

  • Knock_Knock_Lemmy_In@lemmy.world · +16 · 2 days ago

    The key result

    When researchers asked the AI to personalize its arguments to a Redditor’s biographical details, including gender, age, and political leanings (inferred, courtesy of another AI model, through the Redditor’s post history), a surprising number of minds indeed appear to have been changed. Those personalized AI arguments received, on average, far higher scores in the subreddit’s point system than nearly all human commenters

    • thanksforallthefish@literature.cafe · +6 · 2 days ago

      While that is indeed what was reported, we and the researchers will never know if the posters with shifted opinions were human or in fact also AI bots.

      The whole thing is dodgy for lack of controls; this isn’t science, it’s marketing.

    • taladar@sh.itjust.works · +0 · 2 days ago

      If they were personalized wouldn’t that mean they shouldn’t really receive that many upvotes other than maybe from the person they were personalized for?

      • the_strange@feddit.org · +4 · 2 days ago

        I would assume that people in a similar demographic are interested in similar topics. Tailoring the answer to one person within a demographic would therefore tailor it to all people within that demographic who are interested in that specific topic.

        Or maybe it’s just the nature of the answer being more personal that makes it more appealing to people in general, no matter their background.

      • FauxLiving@lemmy.world · +1 · 1 day ago

        Their success metric was to get the OP to award them a ‘Delta’, which is to say that the OP admits that the research bot comment changed their view. They were not trying to farm upvotes, just to get the OP to say that the research bot was effective.

  • MagicShel@lemmy.zip · +69/-1 · 2 days ago

    There’s no guarantee anyone on there (or here) is a real person or genuine. I’ll bet this experiment has been conducted a dozen times or more but without the reveal at the end.

        • Septimaeus@infosec.pub · +9 · 2 days ago

          Hello, this is John Cleese. If you doubt that this is the real John Cleese, here is my mother to confirm that I am, in fact, me. Mother! Am I me?

          Oh yes!

          There you have it. I am me.

            • Septimaeus@infosec.pub · +2 · 1 day ago

              Now look here! I was invited to speak with the very real, very human patrons of this fine establishment, and I’ll not have you undermining my efforts to fulfill that obligation!

        • Forester@pawb.social · +1/-2 · 1 day ago (edited)

          If you think that the US is the only country that does this, I have many, many waterfront properties in the Sahara desert to sell you.

          • Bloomcole@lemmy.world · +1/-2 · 1 day ago (edited)

            You know I never said that, only that they never mention it or can’t admit it.
            Why do the American bots and online operatives always need to start crying about Russian or Chinese interference on any unrelated subject?
            Like this Shakleford here, who admits he’s worked for the fascist, imperialist, war-criminal state.
            I’ve seen plenty of US bootlicker bots/operatives and hasbara genocider scum. I can smell them from afar.
            Not so much Chinese or Russians.

            • Forester@pawb.social · +1 · 1 day ago

              Well, my friend, if you can’t smell the shit, you should probably move away from the farm. Russian and Chinese propaganda has a certain scent to it. The same with American. Sounds like you’re just nose-blind.

              • Bloomcole@lemmy.world · +1/-1 · 1 day ago

                I know anything said online that goes against the Western narrative immediately gets slandered: ‘Russian bots’, ‘100+ social credit’ and that lame BS.
                Paranoid, delusional, Pavlovian reflexes induced by Western propaganda.
                Incapable of fathoming that people have another opinion, so they must be paid!
                If that’s the mindset, then you will indeed see a lot of those.
                The most obvious ones to spot are definitely the hasbara types, same pattern and vocab, and really bad at what they do.

                • Forester@pawb.social · +1 · 1 day ago

                  I mean that’s just like your opinion man.

                  However, there are in fact government assets promoting those opinions and herding those clueless people. What a lot of people fail to realize is that this isn’t a 2v1 or even a 3v1 fight. This is an international free-for-all, with upwards of 45 different countries getting in on the melee.

    • inlandempire@jlai.lu · +7 · 2 days ago

      I’m sorry but as a language model trained by OpenAI, I feel very relevant to interact - on Lemmy - with other very real human beings

    • dzsimbo@lemm.ee · +4 · 2 days ago

      There’s no guarantee anyone on there (or here) is a real person or genuine.

      I’m pretty sure this isn’t a baked-in feature of meatspace either. I’m a fan of solipsism and Last Thursdayism personally. Also propaganda posters.

      The CMV sub reeked of bot/troll/farmer activity, much like the amitheasshole threads. I guess it can be tough to recognize if you weren’t there to see the transition from authentic posting to justice/rage bait.

      We’re still in the uncanny valley, but it seems we’re climbing out of it. I’m already being ‘tricked’ left and right by near-perfect voice AI and tinkered-with image gen. What happens when robots pass the imitation game?

      • tamman2000@lemm.ee · +1 · 22 hours ago

        I think the reddit user base is shifting too. It’s less “just the nerds” than it used to be. The same thing happened to Facebook. It fundamentally changed when everyone’s mom joined…

      • pimento64@sopuli.xyz · +1/-2 · 1 day ago

        We’re still in the uncanny valley, but it seems we’re climbing out of it. I’m already being ‘tricked’ left and right by near-perfect voice AI and tinkered-with image gen

        Skill issue

      • Maeve@kbin.earth · +0 · 2 days ago

        I’m conflicted by that term. Is it ok that it’s been shortened to “glow”?

        • max_dryzen@mander.xyz · +2 · 2 days ago (edited)

          Conflict? A good image is a good image regardless of its provenance. And yes, 2020s-era 4chan was pretty much glowboy central; one look at the top posts by country of origin said as much. It arguably hasn’t been worth bothering with since 2015.

    • M137@lemmy.world · +1 · 20 hours ago

      Dozens? That’s like saying there are hundreds of ants on earth. I’m very comfortable saying it’s hundreds, thousands, tens of thousands. And I wouldn’t be surprised if it’s hundreds of thousands of times.

    • conicalscientist@lemmy.world · +4 · 1 day ago (edited)

      I don’t know what you have in mind, but the founders originally used bots to generate activity and make the site look popular. Which raises the question: what was really the root of Reddit culture? Was it bots following human activity to bolster it, or were the humans merely following what the founders programmed the bots to post?

      One thing’s for sure: Reddit has always been a platform of questionable integrity.

      • FourWaveforms@lemm.ee · +1 · 24 hours ago

        They’re banning 10+ year accounts over trifling things and it’s got noticeably worse this year. The widespread practice of shadowbanning makes it clear that they see users as things devoid of any inherent value, and that unlike most corporations, they’re not concerned with trying to hide it.

      • CBYX@feddit.org · +4/-2 · 2 days ago

        Not sure how anyone hasn’t expected that Russia has been doing this the whole time on conservative subreddits…

        • skisnow@lemmy.ca · +11/-4 · 2 days ago

          Russia are every bit as active in leftist groups whipping them up into a frenzy too. There was even a case during BLM where the same Russian troll farm organised both a protest and its counter-protest. Don’t think you’re immune to being manipulated to serve Russia’s long-term interests just because you’re not a conservative.

          They don’t care about promoting right-wing views, they care about sowing division. They support Trump because Trump sows division. Their long-term goal is to break American hegemony.

          • Madzielle@lemmy.dbzer0.com · +4 · 1 day ago

            There have been a few times over the last few years that my “bullshit, this is an extremist plant/propaganda” meter has gone off for left-leaning individuals.

            Meaning these comments/videos are made to look like they come from leftists, but are meant to make the left look bad/extremist in order to push people away from working-class movements.

            I’m truly a layman, but you just know it’s out there. The goal is indeed to divide us, and everyone should be suspicious of everything they see on the Internet and properly vet their sources.

          • CBYX@feddit.org · +2 · 2 days ago

            The difference is in which groups are consequently making it their identity and giving one political party carte blanche to break American politics and political norms (and national security orgs).

            100% agree though.

          • aceshigh@lemmy.world · +4 · 1 day ago

            Yup. We’re all susceptible to joining a cult. No one willingly joins a cult; their group slowly morphs into one.

        • Geetnerd@lemmy.world · +6 · 2 days ago

          Those of us who are not idiots have known this for a long time.

          They beat the USA without firing a shot.

        • taladar@sh.itjust.works · +1 · 2 days ago

          Mainly I didn’t really expect it, since the old pre-AI methods of propaganda worked so well for the US conservatives’ self-destructive agenda that AI didn’t seem necessary.

        • seeigel@feddit.org · +2 · 1 day ago

          Or somebody else is doing the manipulation and is successfully putting the blame on Russia.

  • LovingHippieCat@lemmy.world · +45 · 2 days ago (edited)

    If anyone wants to know which subreddit, it’s r/changemyview. I remember seeing a ton of similar posts about controversial opinions, and even now people are questioning AmIOverreacting and AITAH a lot. AI posts in those kinds of subs are seemingly pretty frequent. I’m not surprised to see it was part of a fucking experiment.

    • Refurbished Refurbisher@lemmy.sdf.org · +3 · 1 day ago

      AIO and AITAH are so obviously just AI posting. It’s all just a massive circlejerk of AI and people who don’t know they’re talking to AI agreeing with each other.

    • eRac@lemmings.world · +11 · 2 days ago

      This was comments, not posts. They used one model to approximate the demographics of a poster, then an LLM to generate a response countering the posted view, tailored to those demographics.
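
      In rough terms, that two-stage pipeline looks something like the sketch below. The endpoint, model name, and prompts are stand-ins (a local Ollama install), not the researchers’ actual setup:

      ```python
      # Rough shape of the two-stage setup: one model guesses demographics
      # from post history, another writes a tailored counterargument.
      # Endpoint, model name, and prompts are illustrative stand-ins.
      import requests

      def ask(prompt: str) -> str:
          """Query a local model via Ollama's generate endpoint."""
          r = requests.post(
              "http://localhost:11434/api/generate",
              json={"model": "llama3", "prompt": prompt, "stream": False},
              timeout=120,
          )
          r.raise_for_status()
          return r.json()["response"]

      def infer_demographics(post_history: list[str]) -> str:
          """Stage 1: estimate gender, age, and political leaning from past posts."""
          return ask(
              "In one line, guess this Redditor's gender, age range, and political "
              "leaning from these posts:\n" + "\n".join(post_history[-50:])
          )

      def tailored_counterargument(posted_view: str, demographics: str) -> str:
          """Stage 2: argue against the posted view, tuned to the inferred reader."""
          return ask(
              f"Write a persuasive Reddit comment disputing: '{posted_view}'. "
              f"Tailor the tone and examples to this reader: {demographics}."
          )
      ```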

      • FauxLiving@lemmy.world · +3 · 1 day ago (edited)

        You’re right about this study. But this research group isn’t the only one using LLMs to generate content on social media.

        There are 100% posts that are bot-created. Do you ever notice how, in places like AmIOverreacting or AmItheAsshole, a lot of the posts just so happen to hit all of the hot-button issues at once? Nobody’s life is that cliché, but it makes excellent engagement bait, and the comment chain provides a huge amount of training data as the users argue over the various topics.

        I use a local LLM, which I’ve fine-tuned, to generate replies to people who are obviously arguing in bad faith, in order to string them along and waste their time. It’s set up to lead the conversation, via red herrings and various other fallacies, to the topic of good-faith arguments and how people should behave in online spaces. It does this while picking out pieces of the conversation (and of the user’s profile) to chastise the person for their bad behavior. It would be trivial to change the prompt chains to push a political opinion rather than just wasting a person’s/bot’s time. Stripped to its core, the setup looks roughly like the sketch below.
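
        A minimal sketch, not my actual chain: a local Ollama endpoint, a fixed system prompt, and the other user’s comment plus profile snippets fed in. The model name and prompts are stand-ins:

        ```python
        # Stripped-down sketch of the timewaster setup described above.
        # Assumes a local Ollama install; model name and prompts are stand-ins.
        import requests

        SYSTEM_PROMPT = (
            "You are replying to someone arguing in bad faith. Take a chastising "
            "tone, name the fallacies in their last comment, steer the thread "
            "toward good-faith participation, and lecture them on how to behave."
        )

        def timewaster_reply(their_comment: str, profile_snippets: list[str]) -> str:
            """Generate one reply designed to keep a bad-faith arguer busy."""
            resp = requests.post(
                "http://localhost:11434/api/generate",
                json={
                    "model": "llama3",  # assumption: any local instruct model
                    "system": SYSTEM_PROMPT,
                    "prompt": (
                        "Their profile:\n" + "\n".join(profile_snippets)
                        + f"\n\nTheir comment:\n{their_comment}\n\nYour reply:"
                    ),
                    "stream": False,
                },
                timeout=120,
            )
            resp.raise_for_status()
            return resp.json()["response"]
        ```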

        This is being done as a side project, on under $2,000 worth of consumer hardware, by a barely competent programmer with no training in psychology or propaganda. It’s terrifying to think of what you could do with real resources and experts working full-time.

  • ImplyingImplications@lemmy.ca · +20 · 2 days ago

    The ethics violation is definitely bad, but their results are also concerning. They claim their AI accounts were six times more likely than a real person to persuade people into changing their minds. AI has become an overpowered tool in the hands of propagandists.

      • TimewornTraveler@lemm.ee · +2 · 21 hours ago

        I mean, that’s the point of research: to demonstrate real-world problems and put them in more concrete terms so we can respond more effectively.

    • ArchRecord@lemm.ee · +6 · 2 days ago

      To be fair, I do believe their research was based on how convincing it was compared to other Reddit commenters, rather than, say, an actual person you’d normally see doing the work for a government propaganda arm, with the training and skill set to effectively distribute propaganda.

      Their assessment of how “convincing” it was seems to also have been based on upvotes, which if I know anything about how people use social media, and especially Reddit, are often given when a comment is only slightly read through, and people are often scrolling past without having read the whole thing. The bots may not have necessarily optimized for convincing people, but rather, just making the first part of the comment feel upvote-able over others, while the latter part of the comment was mostly ignored. I’d want to see more research on this, of course, since this seems like a major flaw in how they assessed outcomes.

      This, of course, doesn’t discount the fact that AI models are often much cheaper to run than the salaries of human beings.

      • FauxLiving@lemmy.world · +4 · 1 day ago

        This, of course, doesn’t discount the fact that AI models are often much cheaper to run than the salaries of human beings.

        And there’s the fact that you can generate hundreds or thousands of them at the drop of a hat, burying any social media topic in highly convincing ‘people’, so that the average reader is more than likely to read the opinion you’re pushing and not the opinions of actual human beings.

  • TwinTitans@lemmy.world · +26 · 2 days ago (edited)

    Like the 90s/2000s - don’t put personal information on the internet, don’t believe a damned thing on it either.

    • mic_check_one_two@lemmy.dbzer0.com · +17/-1 · 2 days ago

      Yeah, it’s amazing how quickly the “don’t trust anyone on the internet” mindset changed. The same boomers who cautioned us against playing online games with friends are now the ones sharing blatantly AI-generated slop from strangers on Facebook as if it were gospel.

      • Serinus@lemmy.world · +12/-4 · 2 days ago

        Back then it was just old people trying to groom 16 year olds. Now it’s a nation’s intelligence apparatus turning our citizens against each other and convincing them to destroy our country.

        I wholeheartedly believe they’re here, too. Their primary function here is to discourage the left from voting, primarily by focusing on the (very real) failures of the Democrats while the other party is extremely literally the Nazi party.

        • queermunist she/her@lemmy.ml · +1/-9 · 2 days ago (edited)

          Everyone who disagrees with you is a bot, probably from Russia. You are very smart.

          Do you still think you’re going to be allowed to vote for the next president?

          • Serinus@lemmy.world · +7/-1 · 2 days ago

            Everyone who disagrees with you is a bot

            I mean that’s unironically the problem. When there absolutely are bots out here, how do you tell?

            • queermunist she/her@lemmy.ml · +2/-5 · 2 days ago

              Sure, but you seem to be under the impression the only bots are the people that disagree with you.

              There’s nothing stopping bots from grooming you by agreeing with everything you say.

          • EldritchFeminity@lemmy.blahaj.zone · +3/-1 · 1 day ago

            Everyone who disagrees with you is a bot, probably from Russia. You are very smart.

            Where did they say that? They just said bots in general. It’s well known that Russia has been running a propaganda campaign across social media platforms since at least the 2016 elections (just like the US is doing on Russian and Chinese social media, I’m sure; they do it on Americans as well. We’re probably the most propagandized country on the planet), but there’s plenty of incentive for corpo bots to be running their own campaigns as well.

            Or are you projecting for some reason? What do you get from defending Putin?

        • supersquirrel@sopuli.xyz · +3 · 1 day ago (edited)

          Social media didn’t break people’s brains. The massive influx of conservative corporate money to distort society, keep existential problems from being fixed until it is too late, and push people to resort to impulsive, kneejerk responses because they have been ground down to crumbs… that broke people’s brains.

          If we didn’t have social media right now and all of this was happening, it would be SO much worse without younger people being able to find news about the Palestinian Genocide or other world news that their country/the rich conservatives around them don’t want them to read.

          It is what those in power DID to social media that broke people’s brains, and it is why most of us have come here to create a social network not driven by those interests.

      • I feel like I learned more about the Internet and shit from Gen X people than from boomers. Though nearly everyone on my dad’s side of the family, including my dad (a boomer), was tech-literate, having worked in tech (my dad is a software engineer), and they still aren’t dumb about tech… Aside from thinking e-greeting cards are rad.

        • FauxLiving@lemmy.world · +1 · 1 day ago

          Aside from thinking e-greeting cards are rad.

          As a late Gen-X/early Millennial, e-greeting cards are rad.

          Kids these days don’t know how good they have it with their gif memes and emoji-supporting character encodings… get off my lawn you young whippersnappers!

    • taladar@sh.itjust.works · +1 · 2 days ago

      I never liked the “don’t believe anything you read on the internet” line; it focuses too much on the internet without considering that you shouldn’t believe anything you read or hear elsewhere either, especially on divisive topics like politics.

      You should evaluate information you receive from any source with critical thinking: consider how easy it would be to fabricate the claim (it’s probably much harder for a single source to get away with claiming that the US president has been assassinated than with claiming their local bus was late one unspecified day at an unspecified location), who benefits from convincing you the statement is true, whether the statement is consistent with other things you know about the world, and so on.

  • teamevil@lemmy.world · +13 · 1 day ago (edited)

    Holy shit… This kind of shit is what ultimately broke Tim (very closely related to Ted) Kaczynski. He was part of MKULTRA research while a student at Harvard, but instead of drugging him, they had a debater, a prosecutor pretending to be a fellow student, who would just argue against any point he made to see when he would break…

    And that’s how you get the Unabomber, folks.

    • Geetnerd@lemmy.world · +6 · 2 days ago

      I don’t condone what he did in any way, but he was a genius, and they broke his mind.

      Listen to The Last Podcast on the Left’s episode on him.

      A genuine tragedy.

      • teamevil@lemmy.world · +1/-1 · 1 day ago

        You know, when I was like 17 and they published the manifesto to get him to stop attacking, I remember thinking: oh, it’s got a few interesting points.

        But I was 17. Not that he doesn’t hit the nail on the head with some of the technological stuff, if you really step back and think about it, but (and this is what I couldn’t see at 17) it’s really just the writing of an incel… He couldn’t communicate with women, had low self-esteem, and had classic nice-guy energy…