• Th4tGuyII@fedia.io

    Image manipulation has always been a thing, and there are ways to counter it…

    But we already know that a shocking number of people will simply take what they see at face value, even if it does look suspicious. The volume of AI-generated misinformation online is already too damn high, without it getting even more strings to its bow.

    Governments don’t seem to be anywhere near keeping up with these AI developments either, so by the time the law starts accounting for all of this, the damage will long since have been done.

      • yamanii@lemmy.world

        Yep, this is a problem of the sheer volume of misinformation: the truth can get buried by a single person generating thousands of fake photos. It’s really easy to lie, and really time-consuming to fact-check.

    • RubberDuck@lemmy.world

      On our vacation two weeks ago my wife took an awesome picture, with just one guy annoyingly in the background. She just tapped him and clicked the button… poof, gone. Perfect photo.

  • stoy@lemmy.zip

    TL;DR: The new Reimage feature on the Google Pixel 9 phones is really good at AI manipulation, while being very easy to use. This is bad.

        • kernelle@lemmy.world

          Photoshop has existed for a while now. So incredibly shocking that this was only going to get better and easier to do. Move along with the times, old-timer.

          • ggppjj@lemmy.world

            Photoshop requires time and talent to make a believable image.

            This requires neither.

              • ggppjj@lemmy.world

                You said “but” like it invalidated what I said, instead of being a true statement and a non sequitur.

                You aren’t wrong, and I don’t think that changes what I said either.

                • kernelle@lemmy.world

                  Lmao, “but” means your statement can be true and irrelevant at the same time. From the day Photoshop could fool people, lawyers have been trying to mark any image as faked, misplaced or out of context.

                  If you’re only just now realising it’s an issue, that’s your problem. People can’t stop these tools from existing, so, like, go yell at a cloud or something.

          • sorghum@sh.itjust.works

            Well yeah, I’m not concerned with its ease of use nowadays. I’m more concerned with computer forensics experts no longer being able to detect a fake, whereas Photoshop edits have always been detectable.

  • FinishingDutch@lemmy.world

    I work at a newspaper as both a writer and photographer. I deal with images all day.

    Photo manipulation has been around as long as the medium itself. And throughout the decades, people have worried about the veracity of images. When Photoshop became popular, some decried it as the end of truthful photography. And now here’s AI, making things up entirely.

    So, as a professional, am I worried? Not really. Because at the end of the day, it all comes down to ‘trust and verify when possible’. We generally receive our images from people who are wholly reliable. They have no reason to deceive us and know that burning that bridge will hurt their organisation and career. It’s not worth it.

    If someone were to send us an image that’s ‘too interesting’, we’d obviously try to verify it through other sources. If a bunch of people photographed that same incident from different angles, clearly it’s real. If we can’t verify it, well, we either trust the source and run it, or we don’t.

    • Knock_Knock_Lemmy_In@lemmy.world

      If a bunch of people photographed that same incident from different angles, clearly it’s real.

      I don’t think you can assume this anymore.

      • Jiggle_Physics@lemmy.world

        Yeah, photo editing software and AI can be used to create images from different points of view, mimicking the different styles and qualities of different equipment, and making adjustments for continuity from perspective to perspective. Unless we have a way for something like AI to identify fabricated images, using some sort of encoded fingerprint or something, it won’t be long until they are completely indiscernible from the genuine article. You would have to prove a negative, that the person who claims to have taken the photo could not have, in order to do so. This, as we know, is far more difficult than current discretionary methods.

        • FinishingDutch@lemmy.world

          The point I’m making isn’t really about the ability to fake specific angles or the tech side of it. It’s about levels of trust and independent sources.

          It’s certainly possible for people to put up some fake accounts and tweet some fake images of separate angles. But I’m not trusting random accounts on Twitter for that. We look at sources like AP, Reuters, AFP… if they all have the same news images from different angles, it’s trustworthy enough for me. On a smaller scale, we look at people and sources we trust and have vetted personally. People with longstanding relationships. It really does boil down to a ‘circle of trust’: if I don’t know a particular photographer, I’ll talk to someone who can vouch for them based on past experiences.

          And if all else fails and it’s just too juicy not to run? We’d slap a big ol’ ‘this image has not been verified’ on it. Which we’ve never had to do so far, because we’re careful with our sources.

          • Jiggle_Physics@lemmy.world

            Sorry, but if traditional news media loses much more ground to “alternative fact” land, on top of its other reasons for decline versus new media, I have zero faith it won’t just give in and go along with it. I mean, if they’re going to fail anyway, why not at least see if they can get themselves a slice of that pie?

    • uienia@lemmy.world

      Personally I think this kind of response shows how unready we are, because it is grounded in the antiquated assumption that this is just more of the same old, instead of the complete revolution in both the quality and quantity of fakery that is about to happen.

    • golli@lemm.ee

      Photo manipulation has been around as long as the medium itself. And throughout the decades, people have worried about the veracity of images. When Photoshop became popular, some decried it as the end of truthful photography. And now here’s AI, making things up entirely.

      I actually think it isn’t the AI photo or video manipulation part that makes it a bigger issue nowadays (at least not primarily), but the way in which they are consumed. AI making things easier is just another puzzle piece in this trend.


      Information volume and speed have increased dramatically, resulting in an overflow that significantly shortens the timespan dedicated to each piece of content. If I slowly read my Sunday newspaper during breakfast, I’ll give it much more attention than I would scrolling through my social media feed. That lack of engagement makes it much easier for misinformation to have the desired effect.

      There’s also the increased complexity of the world. Things can seem reasonable and true on the surface, but have knock-on consequences that aren’t immediately apparent, or only hold true within a narrow picture and fall apart once viewed from a wider perspective. This only gets worse combined with the point above.

      Then there’s the decline in relevance of high-profile leading news outlets and the increased fragmentation of the information landscape. Instead of carefully curated and verified content, immediacy and clickbait take priority. And this imo also has a negative effect on those more classical outlets, which have to compete with it.

      You also have increased populism, especially in politics, and many more trends, all compounding on the same issue of misinformation.

      And even if caught and corrected, usually the damage is done and the correction reaches far fewer people.

    • ikidd@lemmy.world

      Unfortunately, newspapers and news sources like it that verify information reasonably well aren’t where most people get their info from anymore, and IMO, are unlikely to be around in a decade. It’s become pretty easy to get known misinformation widely distributed and refuting it does virtually nothing to change popular opinion on these stories anymore. This is only going to get worse with tools like this.

  • WoahWoah@lemmy.world

    This is a hyperbolic article to be sure. But many in this thread are missing the point. It’s not that photo manipulation is new.

    It’s the volume and quality of photo manipulation that’s new. “Flooding the zone with bullshit,” i.e. decreasing the signal-to-noise ratio, can have a demonstrable social effect.

  • yamanii@lemmy.world

    These Photoshop comments are missing the point. It’s just like art: a good edit that can fool everyone takes someone who has practiced a lot and has lots of experience. Now even the lazy asses on the right can fake it easily.

    • Drewelite@lemmynsfw.com

      I think this comment misses the point that even one doctored photo created by a team of highly skilled individuals can change the course of history. And when that’s what it takes, it’s easier to sell it to the public.

      What matters is the source. What we’re being forced to reckon with now is that the assumption that photos capture indisputable reality has never been and will never be true. That’s why we invented journalism: ethically driven people who investigate and act as impartial sources of truth on what’s happening in the world. But we’ve neglected and abused the profession so much that it’s a shell of what we need it to be.

  • randy@lemmy.ca

    Relevant XKCD. Humans have always been able to lie. Having a single form of irrefutable proof is the historical exception, not the rule.

    • samus12345@lemmy.world

      Regarding that last panel, why would multiple people go through the trouble of carving lies about Ea-Nasir’s shitty copper? And even if they did, why would he keep them? No, his copper definitely sucked.

    • gandalf_der_12te@lemmy.blahaj.zone

      Interesting thought. For most of history we didn’t have photos, and people didn’t need them. Also, we’ve been able to produce text deepfakes all throughout history (and people actually did that - a lot), and somehow humanity still survived and made progress. Maybe we should question our assumption that we really need a medium that communicates absolute truth.

      • Drewelite@lemmynsfw.com

        If you’re getting your truth from somewhere you don’t trust, you’ve already lost the plot. Having a medium to convey absolute truth is NOT the exception, because it never existed. Not with first hand accounts, not with photos, not with videos. Anything, from its inception, has been able to be faked by someone motivated enough.

        What we need is an industry of independent ethically driven individuals to investigate and be a trusted source of truth on the world’s important events. Then they can release journals about their findings. We can call them journalers or something, I don’t know, I don’t have all the answers. Too bad nothing like that exists when we need it most 🥲

        • rottingleaf@lemmy.world

          What we need is distribution of power. Power acts upon information. There was that weird idea that with solid information there’s no need to distribute power. When people say “due process”, they usually mean that. This wasn’t true anyway.

          Information is still fine, people lie and have always lied, humanity has always relied upon chains and webs of trust.

          The issue is centralized power forcing you to walk their paths.

  • hperrin@lemmy.world

    We literally lived for thousands of years without photos. And we’ve lived for 30 years with Photoshop.

    • Squizzy@lemmy.world

      The article takes a doom-laden tone for sure, but the reality is that we know how dangerous and prolific misinformation is.

      • Echo Dot@feddit.uk

        The Nazis based their entire philosophy on misinformation, and they did this in a world that predated computers. I don’t actually think there’s going to be a problem here: all of the issues people are claiming exist have always been possible, and not only possible but actually done in many cases.

        AI is just the tool by which misinformation will now be spread, but if AI didn’t exist the misinformation would just find another path.

        • Sineljora@sh.itjust.works

          I disagree with your point that it wouldn’t get worse. The Nazi example was in fact much worse for its time because of a new tool they called the “eighth great power”.

          Goebbels used radio, which was new at the time, and subsidized radios for German citizens. AI is new, faster and more compelling than radio, not limited to a specific media type, and everyone already has receivers.

    • ZILtoid1991@lemmy.world

      Except it was way harder to do.

      Now call me an “ableist, technophobic luddite” who wants to ruin other people’s chance of making GTA-like VRMMORPGs from a single line of prompt!

  • cley_faye@lemmy.world

    This is only a threat to people who take random pictures at face value, which should not have been a thing for a long while now, generative AI or not.

    The source of a piece of information or a picture, as well as how it was checked, has been the most important part of handling online content for decades. The fact that it is now easier for some people to make edits does not change that.

  • Ilovethebomb@lemm.ee

    Meh, those edited photos could have been created in Photoshop as well.

    This makes editing and retouching photos easier, and that’s a concern, but it’s not new.

    • FlihpFlorp@lemm.ee

      Something I heard in the Photoshop-vs-AI argument is that it makes an already existing process much faster, and almost anyone can do it, which increases the sheer amount that one person or a group could produce, a bit like how the printing press made the production of books so much faster (if you’re into history).

      I’m too tired to take a stance so I’m just sharing some arguments I’ve heard

  • Echo Dot@feddit.uk

    Okay, so it’s The Verge, so I’m not exactly expecting much, but seriously?

    No one on Earth today has ever lived in a world where photographs were not the linchpin of social consensus

    People have been faking photographs basically since day one, with techniques like double exposure. Also, even more sophisticated photo manipulation has been possible with Photoshop, which has existed for decades.

    There’s a photo of me taken in the ’90s on Thunder Mountain at Disneyland which has been edited to look like I’m actually on a mountainside rather than in a theme park. I think we can deal with fakeable photographs; the only difference here is that the process is automatable, which honestly doesn’t make the blindest bit of difference. It’s quicker, but so what?

    • TheFriar@lemm.ee

      It used to take professionals or serious hobbyists to make something fake look believable. Now it’s at the tip of everyone’s fingers. Fake photos were already a smaller issue, but this very well could become a tidal wave of fakes trying to grab attention.

      Think about how many scammers there are. Think about how many horny boys there are. Think about how much online political fuckery goes around these days. When believable photographs of whatever you want people to believe are at the tips of anyone’s fingers, it’s very, very easy to start a wildfire of misinformation. And think about the young girls being tormented in middle school and high school. And all the scammable old people. And all the fascists willing to use any tool at their disposal to sow discord and hatred.

      It’s not a nothing problem. It could very well become a torrent of lies.

      • rottingleaf@lemmy.world

        Come on, science fiction has had similar technologies for faking things since the ’40s. The writing was on the wall.

        It didn’t really work outside of authors’ and readers’ imaginations, but the only reason we’re scared is that we’re forced into centralized hierarchical systems in which it’s harder to defend ourselves.

        • TheFriar@lemm.ee

          I mean, sure, deception as a concept has always been around. But let me just put it this way:

          How many more scam emails, scam texts, how many more data leaks, conspiracy theories are going around these days? All of these things always existed. The Nigerian prince scam. That one’s been around forever. The door-to-door salesman, that one’s been around forever. The snake oil charlatan. Scams and lies have been around since we could communicate, probably. But never before have we been bombarded with them like we are today. Before, it took a guy with a rotary phone and a phone book a full day to try to scam 100 people. Now 100 calls go out all at once with a different fake phone number for each, spoofed to be as close to the recipient’s number as possible.

          The effort needed for these things has dropped significantly with new tech, and their prevalence has skyrocketed. It’s not a new story. In fact, it’s a very old story. It’s just more common and much easier, so it’s taken up by more people because it’s more lucrative. Why spend all of your time trying to hack a campaign’s email (which is also still happening), when you can make one suspicious picture and get all of your bots to get it trending so your company gets billions in tax breaks? All at the click of a button. Then send your spam bots to call millions of people a day to spread the information about the picture, and your email bots to spam the picture to every Facebook conspiracy theorist. All in a matter of seconds.

          This isn’t a matter of “what if.” This is kind of just the law of scams. It will be used for evil. No question. And it does have an effect. You can’t have random numbers call you anymore without immediately expecting them to be spam. Soon, you won’t be able to get photo evidence without immediately thinking it might be fake. Water flows downhill, and new tech gets used for scams. It’s like a law of nature at this point.

          • rottingleaf@lemmy.world

            Wise people still teach their children (and remind themselves) not to talk to strangers, say “no” if not sure, mind their own business because their attention and energy are not infinite, and trust only family.

            You can’t have random numbers call you anymore without you immediately expecting their spam.

            In the Middle Ages you’d have been wary of people who were not your neighbors. Were you a nobleman, you’d still mostly talk to people you’d known since childhood, yours or theirs, and the rare new faces would be people you’d heard about since childhood, yours or theirs.

            It’s not a new danger. Even qualitatively - the change for a villager coming to a big city during the industrial revolution was much more radical.

            • TheFriar@lemm.ee

              That’s exactly what I meant when I said:

              It’s not a new story. In fact, it’s a very old story.

              And you just kinda proved my point. As time has gone on, the scale of deception has grown with new technology. This is just the latest iteration. And every new one has expanded the chances/danger exponentially.

              • rottingleaf@lemmy.world

                What I really meant is that humanity is a self-regulating system. This disturbance will be regulated just as well as those other ones.

                The unpleasant thing is that the example I’ve given involved lots of new power being created, while our disturbance is the opposite - people/forces already holding power desperately trying to preserve their relative weight, at the cost of preventing new power from being created.

                But we will see if they’ll succeed. After all, the very reason they are doing this is because they can’t create power, and that is because their institutional understanding is lacking, and this in turn means that they are not in fact doing what they think they are. And by forcing those who can create power to the fringe, they are accelerating the tendencies for relief.

                • TheFriar@lemm.ee

                  I don’t think this is the power redistribution you’re implying it is. I’m not actually sure what you mean by that. The power to create truths? To spread propaganda? I can’t think of any other power this tech would redistribute. Would you mind explaining?

  • JackGreenEarth@lemm.ee

    People can write things that aren’t true! Oh no, now we can’t trust trustworthy texts such as scientific papers that have undergone peer review!

    • BalooWasWahoo@links.hackliberty.org

      I mean… have you seen the scathing reports on scientific papers, psychology especially? Peer review doesn’t catch liars. It catches bad experimental design, and it sometimes screens out people the reviewers don’t like. Replication can catch liars sometimes, but even in the sciences that are ‘hard’ it is rare to see replication because that doesn’t bring the grant money in.

  • conciselyverbose@sh.itjust.works

    I think this is a good thing.

    Pictures/video without verified provenance have not constituted legitimate evidence for anything with meaningful stakes for several years. Perfect fakes have been possible at the level of serious actors already.

    Putting it in the hands of everyone brings awareness that pictures aren’t evidence, lowering their impact over time. Not being possible for anyone would be great, but that isn’t and hasn’t been reality for a while.

    • reksas@sopuli.xyz

      While this is a good thing, not being able to tell what is real and what is not would be a disaster. What if every comment here but yours were generated by some really advanced AI? What they can do now will be laughable compared to what they can do many years from now. And at that point it will be too late to demand anything be done about it.

      AI-generated content should have some kind of tag or mark that is inherently tied to it and can be used to identify it as AI-generated, even if only part of it is used. No idea how that would work, though, or if it’s even possible.
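
      For the sake of argument, here’s a very crude sketch of that kind of baked-in mark: a toy watermark hidden in the low bits of raw 8-bit pixel data, in plain Python with a made-up bit pattern. Real schemes (statistical watermarks, C2PA-style provenance metadata) are far more sophisticated, but the sketch shows both the basic mechanism and why “even if only part is used” is the hard requirement: the naive version falls apart as soon as the image is cropped or re-encoded.

      ```python
      # Toy illustration only: a hypothetical generator marks its output by forcing
      # a fixed bit pattern into the least significant bit of each pixel. The
      # pattern and the raw-byte "image" are made up for illustration.

      WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical generator-specific pattern


      def embed(pixels: bytes) -> bytes:
          """Hide the pattern in the least significant bit of every pixel value."""
          return bytes((p & 0xFE) | WATERMARK[i % len(WATERMARK)] for i, p in enumerate(pixels))


      def detect(pixels: bytes) -> float:
          """Fraction of pixels whose low bit matches the pattern (~1.0 = marked, ~0.5 = chance)."""
          if not pixels:
              return 0.0
          hits = sum((p & 1) == WATERMARK[i % len(WATERMARK)] for i, p in enumerate(pixels))
          return hits / len(pixels)


      if __name__ == "__main__":
          image = bytes(range(256)) * 4                 # stand-in for generated pixel data
          marked = embed(image)
          print(detect(marked))                         # 1.0: the untouched file is clearly tagged
          print(detect(marked[100:300]))                # ~0.5: cropping shifts alignment, detection collapses
          print(detect(bytes(p // 2 for p in marked)))  # ~0.5: re-encoding/rescaling destroys it too
      ```

      Robust watermarking schemes try to survive exactly those operations, which is why this is still an open research problem rather than a solved one.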

  • LucidNightmare@lemm.ee

    There was actually a user on Lemmy who asked if the original photo of the massacre was AI. It hadn’t occurred to me that people who had never heard of the 1989 Tiananmen Square protests and massacre would find the image and question whether it was real or not.

    A very sad sight, a very sad future.

    • HiddenLychee@lemmy.world

      Photoshop has existed for years. It’s no different from a student in 2010 being shocked at the horrors of man and trying to figure out how it could have been faked with a computer. People have denied the Holocaust for generations!

      • uienia@lemmy.world

        This argument keeps missing that it is not only the quality but mainly the quantity of fakes that is going to be the problem. The complete undermining of trust in photographic evidence is seen as a good thing by so many nefarious vested interests that it is an aim they will actively strive for.

  • adam_y@lemmy.world

    It’s always been about context and provenance. Who took the image? Are there supporting accounts?

    But also, it has always been about the knowledge that no one… Absolutely no one… Does lines of coke from a woven mat floor covering.

    don't do drugs kids.

    • mctoasterson@reddthat.com

      Lots of obviously fake tipoffs in this one. The overall scrawny bitch aesthetic, the fact she is wearing a club/bar wrist band, the bottle of Mom Party Select™ wine, and the person’s thumb/knee in the frame… All those details are initially plausible until you see the shitty AI artifacts.

      • Ledivin@lemmy.world

        Lots of obviously fake tipoffs in this one. The overall scrawny bitch aesthetic, the fact she is wearing a club/bar wrist band, the bottle of Mom Party Select™ wine, and the person’s thumb/knee in the frame… All those details are initially plausible until you see the shitty AI artifacts.

        This is an AI-edited photo, and literally every “artifact” you pointed out is present in the original except for the wine bottle. You’re not nearly as good at spotting fakes as you think you are - nobody is.

      • Ilovethebomb@lemm.ee

        All the details you just mentioned are also present in the unaltered photo though. Only the “drugs” are edited in.

        Didn’t read the article, did you?

  • Blackmist@feddit.uk

    We’ve had fake photos for over 100 years at this point.

    https://en.wikipedia.org/wiki/Cottingley_Fairies

    Maybe it’s time to do something about confirming authenticity, rather than just accepting any old nonsense as evidence of anything.

    At this point anything can be presented as evidence, and now can be equally refuted as an AI fabrication.

    We need a new generation of secure cameras with internal signing of images and video (to prevent manipulation), built in LIDAR (to make sure they’re not filming a screen), periodic external timestamps of data (so nothing can be changed after the supposed date), etc.
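
    For what it’s worth, the signing part of that wish list is already well-understood cryptography; the genuinely hard parts are keeping the private key inside tamper-resistant camera hardware, proving the sensor saw a real scene rather than a screen, and anchoring timestamps externally. A minimal sketch of just the signing piece, in Python with the third-party cryptography package; the per-device key and the record format here are hypothetical, not any real vendor’s scheme:

    ```python
    # Sketch only: a camera with a hypothetical built-in Ed25519 key signs a small
    # provenance record over each capture. Requires `pip install cryptography`.
    import hashlib
    import json
    import time

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


    def sign_capture(device_key: Ed25519PrivateKey, image_bytes: bytes) -> dict:
        """Provenance record the camera would emit alongside the image file."""
        record = {
            "sha256": hashlib.sha256(image_bytes).hexdigest(),  # binds the record to these exact bytes
            "captured_at": int(time.time()),                    # ideally an externally anchored timestamp
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["signature"] = device_key.sign(payload).hex()
        return record


    def verify_capture(device_public_key, image_bytes: bytes, record: dict) -> bool:
        """Anyone with the device's public key can check the file is unmodified since capture."""
        if hashlib.sha256(image_bytes).hexdigest() != record["sha256"]:
            return False  # the pixels were altered after signing
        payload = json.dumps(
            {k: record[k] for k in ("sha256", "captured_at")}, sort_keys=True
        ).encode()
        try:
            device_public_key.verify(bytes.fromhex(record["signature"]), payload)
            return True
        except InvalidSignature:
            return False


    if __name__ == "__main__":
        key = Ed25519PrivateKey.generate()   # stand-in for a key held in secure hardware
        photo = b"...raw sensor data..."     # stand-in for the actual capture
        rec = sign_capture(key, photo)
        print(verify_capture(key.public_key(), photo, rec))            # True
        print(verify_capture(key.public_key(), photo + b"edit", rec))  # False
    ```

    Of course, this only proves the bytes haven’t changed since the device signed them; it says nothing about who controls the keys or whether the scene itself was staged, which is where the objections below come in.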

    • Richard@lemmy.world

      I am very opposed to this. It means surrendering all trust in pictures to Big Tech. If at some time only photos signed by Sony, Samsung, etc. are considered genuine, then photos taken with other equipment, e.g., independently manufactured cameras or image sensors, will be dismissed out of hand. If, however, you were to accept photos signed by the operating system on those devices regardless of who is the vendor, that would invalidate the entire purpose because everyone could just self-sign their pictures. This means that the only way to effectively enforce your approach is to surrender user freedom, and that runs contrary to the Free Software Movement and the many people around the world aligned with it. It would be a very dystopian world.

      • Blackmist@feddit.uk

        It would also involve trusting those corporations not to fudge evidence themselves.

        I mean, not everything photo related would have to be like this.

        But if you wanted your photo to be able to document things, to provide evidence that could send people to prison or get them executed…

        The other choice is that we no longer accept photographic, audio or video evidence in court at all. If it can no longer be trusted and even a complete novice can convincingly fake things, I don’t see how it can be used.