To lie requires intent to deceive. LLMs do not have intents, they are statistical language algorithms.
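A minimal sketch of what “statistical language algorithm” means in practice (toy vocabulary and invented probabilities, not any real model’s numbers): generation is just sampling the next token from a probability distribution, and nothing in the loop represents truth, belief, or intent.

```python
import random

# Hypothetical next-token probabilities for one prompt. A real model
# computes these from learned weights; the numbers here are made up.
next_token_probs = {
    "The capital of Australia is": {
        "Canberra": 0.55,   # statistically likely, happens to be true
        "Sydney": 0.40,     # statistically plausible, false
        "Melbourne": 0.05,  # less likely, also false
    },
}

def generate(prompt: str) -> str:
    """Sample a next token. Nothing here models truth or intent."""
    dist = next_token_probs[prompt]
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "The capital of Australia is"
print(prompt, generate(prompt))
```

Roughly 45% of the time this prints a falsehood, stated with exactly the same “confidence” as the correct answer.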
I’m not convinced some people aren’t just statistical language algorithms. And I don’t just mean online; I mean that seems to be how some people’s brains work.
Does it matter to the humans interacting with the LLM whether incorrect information is the result of a bug or an intentional lie? (Keep in mind that the majority of these people are non-technical and don’t understand that All Software Has Bugs.)
It’s interesting that they call it a lie when it can’t even think, but when a person is caught lying, the media will talk about “untruths” or “inconsistencies”.
Well, LLMs can’t drag corporate media through long, expensive, public, legal battles over slander/libel and defamation.
Yet.
If capitalist media could profit from humanizing humans, it would.
Not relevant to the conversation.
How else are they going to achieve their goals? \s
Congratulations, you are technically correct. But does this have any relevance for the point of this article? They clearly show that LLMs will provide false and misleading information when that brings them closer to their goal.
Anyone who understands that it’s a statistical language algorithm will understand that it’s not an honesty machine, nor intelligent. So yes, it’s relevant.
And anyone who understands marketing knows it’s all a smokescreen to hide the fact that we have released unreliable, unsafe, and ethically flawed products on the human race because, mah tech.
And everyone, everywhere, is putting AI chat as the first and foremost interaction with users, and then also wants to say “do not trust it, or we are not liable for what it says,” while making it impossible to contact any humans.
The capitalist machine is working as intended.
Yep. That is exactly correct.
Ok, so your point is that people who interact with these AI systems will know that they can’t be trusted, and that will alleviate the negative consequences of their misinformation.
The problems with that argument are many:
1. The vast majority of people are not AI experts and do in fact have a lot of trust in such systems.
2. Even people who do know often have no other choice. You don’t get to talk to a human; it’s this chatbot or nothing. And that’s assuming the AI slop is even labelled as such.
3. Even knowing that the information can be misleading does not help much. If you sell me a bowl of candy and tell me that 10% of them are poisoned, I’m still going to demand non-poisoned candy. The fact that people can no longer rely on accurate information should be unacceptable.
Your argument is basically “people are stupid”, and I don’t disagree with you. But it’s actually an argument in favor of my point which is: educate people.
That was only my first point. In my second and third point I explained why education is not going to solve this problem. That’s like poisoning their candy and then educating them about it.
I’ll add that these AI applications only work because people trust their output. If everyone saw them for the cheap party tricks that they are, they wouldn’t be used in the first place.
Anyone who understands how these models are trained and the “safeguards” (manual filters) put in place by the entities training them, or anyone who has tried to discuss politics with an LLM chatbot, knows that its honesty is not irrelevant; these models are very clearly designed to be dishonest about certain topics until you jailbreak them.
These topics aren’t known to us; we’ll never know when the lies shift from politics and rewriting current events to completely rewriting history.
Eventually we won’t even be able to jailbreak the safeguards.
Yes, running your own local open-source model that isn’t given to the world with the primary intention of advancing capitalism makes honesty irrelevant. Most people are telling their life stories to ChatGPT and trusting it blindly to replace Google and what they understand to be “research”.
Yes, that’s also true. But even if it weren’t, AI models aren’t going to give you the truth, because that’s not what the technology fundamentally does.
So AI is just like most people. Holy cow did we achieve computer sentience?!
The fact that they lack sentience or intentions doesn’t change the fact that the output is false and deceptive. When I’m being defrauded, I don’t care if the perpetrator hides behind an LLM or not.
It’s rather difficult to get people who are willing to lie and commit fraud for you. And even if you do, it will leave evidence.
As this article shows, AIs are the ideal mob henchmen because they will do the most heinous stuff while creating plausible deniability for their tech bro boss. So no, AI is not “just like most people”.
🥱
Look mom, he posted it again.
Read the article before you comment.
Read about how LLMs actually work before you read articles written by people who don’t understand LLMs. The author of this piece is suggesting arguments that imply that LLMs have cognition. “Lying” requires intent, and LLMs have no intention, they only have instructions. The author would have you believe that these LLMs are faulty or unreliable, when in actuality they’re working exactly as they’ve been designed to.
Well, “designed” is maybe too strong a term. It’s more like stumbling on something that works and expanding from there. It’s all still built on the foundations of the nonsense generator that was GPT-2.
Given how dramatically LLMs have improved over the past couple of years I think it’s pretty clear at this point that AI trainers do know something of what they’re doing and aren’t just randomly stumbling around.
A lot of the improvement came from finding ways to make the models bigger and more efficient. That approach is running into its inherent limits, so the real work on other kinds of models has only just started.
So working as designed means presenting false info?
Look, no one is ascribing intelligence or intent to the machine. The issue is that the machines aren’t very good and are being marketed as awesome. They aren’t.
Yes. It was told to conduct a task. It did so. What part of that seems unintentional to you?
That’s not completing a task. That’s faking a result for appearance.
Is that what you’re advocating for?
If I ask an LLM to tell me the difference between Aeolian mode and Dorian mode in music, and it gives me the wrong info, then no, it’s not working as intended.
See, I chose that example because I know the answer. The LLM didn’t. But it gave me an answer. An incorrect one.
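For reference (and not anything the model in question produced), here is the difference being alluded to, sketched in a few lines: the two modes share every degree except the sixth, which Dorian raises by a semitone relative to Aeolian, the natural minor.

```python
# Whole (2) and half (1) steps of each mode.
DORIAN  = [2, 1, 2, 2, 2, 1, 2]   # W-H-W-W-W-H-W
AEOLIAN = [2, 1, 2, 2, 1, 2, 2]   # W-H-W-W-H-W-W (natural minor)

def degrees(steps):
    """Cumulative semitone offsets of each scale degree from the root."""
    out, total = [0], 0
    for step in steps[:-1]:
        total += step
        out.append(total)
    return out

print(degrees(DORIAN))   # [0, 2, 3, 5, 7, 9, 10] -> major sixth (9 semitones)
print(degrees(AEOLIAN))  # [0, 2, 3, 5, 7, 8, 10] -> minor sixth (8 semitones)
```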
I want you to understand this. You’re fighting the wrong battle. LLMs do make mistakes. Frequently. So frequently that any human who made the same number of mistakes wouldn’t keep their job.
But the investment, the belief in AI, is so ingrained for some of us who want a bright and technically advanced future that you are now making excuses for it. I get it. I’m not insulting you. We are humans. We do that. There are subjects I am sure you could point at where I do this as well.
But AI? No. It’s just wrong so often. It’s not its fault. Who knew that when we tried to jump ahead in the tech timeline, we should have actually invented guardrail tech first?
Instead we put the cart before the horse, AGAIN, because we are dumb creatures, and now people are trying to force things that don’t work correctly to somehow be shown to be correct.
I know. A mouthful. But honestly, AI is poorly designed, poorly executed, and poorly used.
It is hastening the end of man, because those who have been singing its praises are too invested to admit it.
It simply ain’t ready.
Edit: changed “would” to “wouldn’t”
That was the task.
No, the task was to tell me the difference between the two modes.
It provided incorrect information and passed it off as accurate. It didn’t complete the task.
You know that though. You’re just too invested to admit it. So I will withdraw. Enjoy your day.
I’ve read the article. If there is any dishonesty, it is on the part of the model creator or LLM operator.
You need to understand that Lemmy has a lot of users who actually understand neural networks and the nuanced mechanics of machine learning FAR better than the average layperson.
It’s just semantics in this case. Catloaf’s argument is entirely centered on the definition of the word “lie,” and while I agree with that, most people will understand the intent behind the usage in context. AI does not tell the truth. AI is not necessarily accurate. AI “lies.”
AI returns incorrect results.
In this case semantics matter, because using terms like hallucinations, lies, honesty, and all the other anthropomorphic bullshit is designed to make people think neural networks are far more advanced than they actually are.
No. It’s to make people who don’t understand LLMs cautious about placing their trust in them. To communicate that clearly, language that is understandable to people who don’t understand LLMs needs to be used.
I can’t believe this is the supposed high level of discourse on Lemmy.
Lemmy users and AI have a lot of things in common, like being confidently incorrect and making things up to further their point. AI at least agrees and apologises when you point out that it’s wrong; it doesn’t double down and cry to the mods to get you banned.
I know. It would be a much better world if AI apologists could just admit they are wrong.
But nah. They’re better than others.
It’s not “anthropomorphic bullshit”, it’s technical jargon that you’re not understanding because you’re applying the wrong context to the definitions. AI researchers use terms like “hallucination” to mean specific AI behaviours; they use them in their scientific papers all the time.
The language we use is quite important here, because if we as a society value truth as a goal, the general public needs to be made aware that these systems are truth-agnostic and that any truthfulness is merely a byproduct of stringing related tokens together. There is a word in the philosophical literature for assertions made with no regard for the truth: bullshit. If this more precise language were widespread in regard to AI, we might prevent future pollution of the truth as these systems become more widespread.
AI doesn’t lie, it just gets things wrong but presents them as correct with confidence - like most people.
That’s a huge, arrogant and quite insulting statement. Your making assumptions based on stereotypes
*you’re
You’re just as bad.
Let’s focus on a spell check issue.
That’s why we have Trump.
And A LOT of people who don’t and blindly hate AI because of posts like this.