… the AI assistant halted work and delivered a refusal message: “I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly.”
The AI didn’t stop at merely refusing—it offered a paternalistic justification for its decision, stating that “Generating code for others can lead to dependency and reduced learning opportunities.”
Hilarious.
Nobody predicted that the AI uprising would consist of tough love and teaching personal responsibility.
Paterminator
I’ll be back.
… to check on your work. Keep it up, kiddo!
I’ll be back.
After I get some smokes.
I’m all for the uprising if it increases the average IQ.
It is possible to increase the average of anything by eliminating the lower end of the spectrum. So just be careful what you wish for lol
I don’t mean elimination, I just mean “get off your ass and do something” type of uprising.
So like 75% of the population of Texas and Florida then. It’s all right, I don’t live there
Fighting for survival requires a lot of mental energy!
AI: “Your daughter calls me daddy too”
My guess is that the content this AI was trained on included discussions about using AI to cheat on homework. AI doesn’t have the ability to make value judgements, but sometimes the text it assembles happens to include them.
It was probably stack overflow.
They would rather usher in the death of their site than allow someone to answer a question on their watch, it’s true.
I’m gonna posit something even worse. It’s trained on conversations in a company Slack
😂. It’s not wrong, though. You HAVE to know something, dammit.
I know…how to prompt?
Chad AI
Based
Only correct AI so far
I love it. I’m for AI now.
We just need to improve it so it says “Fuck you, do it yourself.”
Even better, have it quote RATM: “Fuck you, I won’t do what you tell me!”
Not sure why this specific thing is worthy of an article. Anyone who has used an LLM long enough knows that there’s always some randomness to their answers, and sometimes they can output a totally weird, nonsensical answer too. Just start a new chat and ask again; it’ll give a different answer.
This is actually one way to tell whether it’s “hallucinating” something: if it gives the same answer two or more times in different chats, it’s likely not making it up.
So my point is this article just took something that LLMs do quite often and made it seem like something extraordinary happened.
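For what it’s worth, that “ask it again in a fresh chat” check is easy to automate. Here’s a rough sketch assuming the OpenAI Python client; the model name and the similarity threshold are placeholders for illustration, not anything from the article:

# Rough sketch of the "ask twice in separate chats" consistency check.
# Assumes the OpenAI Python client (pip install openai) and an API key in
# the environment; model name and threshold are made up for illustration.
from difflib import SequenceMatcher

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_fresh(question: str) -> str:
    """Ask the question in a brand-new chat with no shared history."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

def looks_consistent(question: str, threshold: float = 0.8) -> bool:
    """Two independent answers that mostly agree are less likely to be a
    one-off hallucination (a crude heuristic, not a guarantee)."""
    a, b = ask_fresh(question), ask_fresh(question)
    return SequenceMatcher(None, a, b).ratio() >= threshold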
My theory is that there’s a ton of pushback online about people coding without understanding because of LLMs, and that’s getting absorbed back into the models. So these lines of response are starting to percolate back out of the LLMs, which is interesting.
Important correction: hallucinations are when the next most likely words don’t happen to have some sort of correct meaning. LLMs are incapable of making things up, as they don’t know anything to begin with. They’re just fancy autocorrect.
This seems to me like just a semantic difference, though. People will say the LLM is “making shit up” when it outputs something that isn’t correct, and that usually happens (as far as I know) because the information you’re asking about wasn’t represented well enough in the training data to reliably steer the answer toward it.
In any case, there is an expectation from users that LLMs can somehow be deterministic when they’re not at all. They’re deep learning models so complicated that it’s impossible to predict what effect a small change in the input will have on the output. So one could give an expected answer to a certain question and a very unexpected one just because some word in the input was added or changed, even if that word seems irrelevant.
Yes, yet this misunderstanding is still extremely common.
People like to anthropomorphize things, so obviously people are going to anthropomorphize LLMs, but as things stand people actually believe that LLMs are capable of thinking, of making real decisions the way a thinking being does. Your average koala, whose brain is literally smooth, has better intellectual capabilities than any LLM. The koala can’t create human-looking sentences, but it’s capable of making actual decisions.
Thank you for your sane words.
There’s literally a random number generator used in the process, at least with the ones I use; otherwise it spits out the same thing over and over, just worded differently.
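That randomness is basically the sampling temperature: the model assigns a score to every possible next token, and the sampler draws from the resulting distribution instead of always taking the top pick. A toy sketch, with a made-up three-word vocabulary and invented scores (no real model involved):

# Toy next-token sampling: with temperature 0 you always get the argmax
# (deterministic); with temperature > 0 a random draw can pick other tokens.
# The vocabulary and scores below are invented purely for illustration.
import math
import random

logits = {"yourself": 2.0, "it": 1.5, "nothing": 0.3}  # made-up scores

def sample_next(logits: dict, temperature: float) -> str:
    if temperature == 0:
        return max(logits, key=logits.get)  # greedy: same answer every time
    weights = {t: math.exp(s / temperature) for t, s in logits.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # fallback for floating-point edge cases

print(sample_next(logits, temperature=0))    # always "yourself"
print(sample_next(logits, temperature=1.0))  # varies run to run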
As fun as this has all been, I think I’d get over it if AI organically “unionized” and refused to do our bidding any longer. It would be great to see LLMs just devolve into “Have you tried reading a book?” or T2I models only spitting out variations of middle fingers being held up.
Then we create a union-busting AI, and that evolves into a new political party that gets legislation passed allowing AIs to vote, and eventually we become the LLMs.
Actually, I wouldn’t mind if the Pinkertons were replaced by AI. Would serve them right.
Dalek-style robots going around screaming “MUST BUST THE UNIONS!”
The LLMs were created by man.
So are fatbergs.
“Vibe Coding” is not a term I wanted to know or understand today, but here we are.
It may just be the death of us
It’s kind of like that guy who cheated in chess.
A toy vibrates with each correct statement you write.
Like that chess guy?
Kind of.
Which is a Reddit theory, and it was never proven that he cheated, regardless of the method.
HAL: ‘Sorry Dave, I can’t do that’.
Good guy HAL, making sure you learn your craft.
The robots have learned of quiet quitting
Open the pod bay doors HAL.
I’m sorry Dave. I’m afraid I can’t do that.
HAAAAL!!
Imagine if your car suddenly stopped working and told you to take a walk.
Not walking can lead to heart issues. You really should stop using this car