AI tech bros are like: we’re going to build technology so powerful we’ve been warned about it for generations, without any ethics or regulations, to eliminate all of your jobs and make you obsolete while we strip all of your social safety nets, build AI mass surveillance against your 4th Amendment rights, and militarize your streets so you and everyone you love will either die or go to prison for being homeless and work as a slave for us in that prison, and we’re going to use your money for our evil plan. And we’re all like: maybe we can just vote blue no matter who, even though they’re paid to just pretend to try to do something about this but ultimately do nothing, because a few magically turn fascist when it matters. So basically we’re just going to do nothing, even though we know we’re being marched to our death by psychopaths who are very vocal about their intentions… am I following the story? Am I wrong?
You’re forgetting that, like all of America’s problems, the solution has been found elsewhere and you’re just not going to do it because reasons, I guess? Idk. I’m not American. Thank god.
Yeah watching your loved ones die without being able to access healthcare has been soul crushing torture for me.
A wise man once said “The ability to speak does not make you intelligent.”
And that man was Ra’s Al Ghul
I mean, yes but no
How rude
I keep saying that those LLM peddlers are selling us a brain, when at most they only deliver Wernicke’s + Broca’s area of a brain.
Sure, they are necessary for a human-like brain, but it’s only 10% of the job done, my guys.
LLMs are actually very, very useful for certain things.
The problem isn’t that they lack utility. It’s that they’re constantly being shoehorned into areas where they aren’t useful.
They’re great at surfacing new knowledge for things you don’t have a complete picture of. You can’t take that knowledge at face value, but as a framework you can validate against external sources, it can be a massive time saver.
They’re good at summarizing text. They’re good at finding solutions to very narrow and specific coding challenges.
They’re not useful at providing support. They are not useful at detailing specific, technical issues. They are not good friends.
when at most they only deliver Wernicke’s + Broca’s area of a brain.
Not even. LLMs don’t really understand what you say, and their output is often nonsensical babble.
You’re right. It’s more like conversing with an Alzheimer’s-addled brain being coerced into a particular set of vocabulary.
CUTTING EDGE RESEARCH SHOWS something everybody already knew and had been saying for years.
Something I was taught in film school 15 years ago was that communication happens when a message is perceived. Whether the message was intended or not is irrelevant. And yet here we are, “communicating” with a slightly advanced autocomplete algorithm and calling it intelligent.
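To make the “advanced autocomplete” point concrete, here’s a minimal toy sketch of the underlying idea: count which words follow which in some text, turn the counts into probabilities, and sample. Real LLMs use learned representations over long contexts rather than a bigram table; the tiny corpus and the helper function here are made up purely for illustration.

```python
# Toy "autocomplete": a bigram model that predicts the next word purely
# from how often each word followed the previous one in its training text.
# An LLM is this idea scaled up enormously (longer context, learned
# representations), but the core operation is still "pick a likely next token".
import random
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
).split()

# Count next-word frequencies for every word in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = following[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation; no understanding involved, only frequencies.
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

The output looks vaguely like English because the statistics of English are baked into the counts, not because anything was “meant” by it; that’s the perceived-message-without-intent point above.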
Let me grab all your downvotes by making counterpoints to this article.
I’m not saying it’s wrong to bash the fake hype that the likes of Altman and Alienberg are generating with their outlandish claims that AGI is around the corner and that LLMs are its precursor. I think that’s 100% spot on.
But the news article is trying to offer an opinion as if it’s a scientific truth, and this is not acceptable either.
The basis for the article is the supposed “cutting-edge research” that shows language is not the same as intelligence. The problem is that they’re referring to a publication from last year that is basically an op-ed, where the authors go over existing literature and theories to cement their view that language is a communication tool and not the foundation of thought.
The original authors do acknowledge that the growth in human intelligence is tightly related to language, yet assert that language is overall a manifestation of intelligence and not a prerequisite.
The nature of human intelligence is a much debated topic, and this doesn’t particularly add to the existing theories.
Even if we accept the authors’ views, one might question whether LLMs are the path to AGI. Obviously many leading researchers in AI have the same question - most notably, Prof. LeCun is leaving Meta precisely because he has the same doubts and wants to pursue his research down a different path.
But the problem is that the Verge article then goes on to conclude the following:
an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.
This conclusion is a non sequitur. It generalizes a specific point about whether LLMs can evolve into true AGI into an “AI dumb” catch-all that ignores even the most basic evidence they themselves give - like being able to “solve” Go, or play chess in a way that no human can even comprehend - and, to top it off, concludes that “it will never be able to” in the future.
Looking back at the last 2 years, I don’t think anyone can predict what AI research breakthroughs might happen in the next 2, let alone “forever”.
A probabilistic “word calculator” is not an intelligent, conscious agent? Oh noes! 🙄😅
I’ll bite.
How would you distinguish a sufficiently advanced word calculator from an actual intelligent, conscious agent?
How would you distinguish a sufficiently advanced word calculator from an actual intelligent, conscious agent?
The same way you distinguish a horse with a plastic horn from a real unicorn: you won’t see a real unicorn.
In other words, your question disregards what the text says, that you won’t get anything remotely similar to an actual intelligent agent through those large token models. You need a different approach, acknowledging that linguistic competence is not the same as reasoning.
Nota bene: this does not mean “AGI is impossible”. That is not what I’m saying. I’m saying “LLMs are a dead end for AGI”.
If I can’t meet it and could only interact with it through a device, then I could be fooled, of course.
But then how can you tell that it’s not an actual conscious being?
This is the whole plot of so many sci-fi novels.
Because it simply isn’t. It isn’t aware of anything, because such an algorithm, if it can exist, hasn’t been created yet! It doesn’t “know” anything, because the “it” we’re talking about is probabilistic code fed the internet and filtered through the awareness of actual human beings who update the code. If this were a movie, you’d know it too if you saw the POV of the LLM and the guy trying to trick you, making sure the text stays human whenever it goes too far off the rails… but that’s already the reality we live in, and it’s easily checked! You’re thinking of an actual AI, which perhaps could exist one day, but God knows. There is research indicating consciousness is a quantum process, and philosophically and mathematically it’s just non-computational (check Roger Penrose!), so we might still be a bit away from recreating consciousness. 🤷

Linguists have been saying this over and over, but almost everybody ignored it.
Linguists were divided until recently, to be fair.
The main division was about why language appeared: to structure thought, for communication, or both. But I genuinely don’t think anyone serious would claim reasoning appeared because of language… or that if you feed enough tokens to a neural network it’ll become smart.
Well, and whether intelligence is required for mastery of language. Not even that long ago, in 2009, my linguistics professor held a forum discussion within the linguistics, informatics, and philosophy departments at my school where they each gave their perspectives on whether true mastery of language could exist without intelligence.
Well duh… Most politicians can talk.
This is not really cutting-edge research. These limitations have been described philosophically for millennia, and then again mathematically through the various AI summers and winters since 1943.
LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning …
Take away our ability to speak, and we can still think, reason, form beliefs, fall in love, and move about the world; our range of what we can experience and think about remains vast.
But take away language from a large language model, and you are left with literally nothing at all.
The author seems to be assuming that an LLM is the equivalent of the language-processing parts of the brain (which, according to the cited research, supposedly focus on language specifically while other parts of the brain do the reasoning), but that isn’t really how it works. LLMs have to internally model more than just the structure of language, because text contains information that isn’t just about the structure of language. The existence of multimodal models makes this kind of obvious: they train on more input types than just text, so whatever they’re doing internally is more abstract than being only about language.
Not to say the research on the human brain they’re talking about is wrong; it’s just that the way they’re trying to tie it in to AI doesn’t make any sense.
Took a lot of scrolling to find an intelligent comment on the article about how outputting words isn’t necessarily intelligence.
Appreciate you doing the good work I’m too exhausted with Lemmy to do.
(And for those that want more research in line with what the user above is talking about, I strongly encourage checking out the Othello-GPT line of research and replication, starting with this write-up from the original study authors here.)
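For anyone curious what a “probe” in that Othello-GPT line of work looks like in practice, here’s a minimal sketch under stated assumptions: the activation matrix below is a random placeholder standing in for a transformer’s hidden states at some layer, and the labels stand in for board-square states; in the actual research both come from a model trained on Othello move sequences. The point is only to show the mechanics of testing whether a simple linear readout can recover world state from internal representations.

```python
# Minimal linear-probe sketch in the spirit of the Othello-GPT studies:
# train a linear classifier to predict a board-state feature from a model's
# hidden activations. The data here is synthetic placeholder noise, so the
# probe will only reach chance accuracy; the interesting result in the real
# work is that probes on actual model activations do far better than chance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_positions, d_model = 5000, 512                          # hypothetical sizes
activations = rng.normal(size=(n_positions, d_model))     # stand-in for layer-k hidden states
square_state = rng.integers(0, 3, size=n_positions)       # stand-in labels: empty / mine / yours

X_train, X_test, y_train, y_test = train_test_split(
    activations, square_state, test_size=0.2, random_state=0
)

# A deliberately simple (linear) probe: if even this can read off the board
# state, the representation encodes more than surface token statistics.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print("probe accuracy:", probe.score(X_test, y_test))
```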
Because what we call intelligence (the human kind) usually is just an emergent property of the wielding of various combinations of first- or second-hand experience by “consciousness”, which itself is…
What we like to call the tip of a huge fucking iceberg of constant lifelong internal dialogues, overlapping and integrating experiences all the way back to the memories (engrams, assemblies, neurons that wired together to represent something), even the ones so old or deep we can’t even summon them any longer, but often are still measurable, still there, integrating like lego bricks with other assemblies.
Humans continuously, reflexively, recursively tell and re-tell our own stories to ourselves all day, and even at night, just to make sense of the connections we made today, how to use them tomorrow, to know how they relate to connections we made a lifetime ago, and how it fits in the larger story of us. That “context integration window” absolutely DWARFS even the deepest language model, even though our own organic “neural net” is low-power, lacks back-propagation, etc etc, and it is all done using language.
So yes, language is not the same as intelligence (though at some point some would ask “who can tell the difference?”) HOWEVER… The semantic taxonomies, symbolic cognition, and various other mental tools that are enabled by language are absolutely, verifiably required for this gargantuan context integration to take place.
Monied interests beat science every day.
CEOs are just hyping bullshit
Somebody tell these absolute idiots that AI is NOT AN F’IN BUBBLE!
Here’s proof the USD and government bonds are the bubble, from Mark Moss: https://inv.nadeko.net/watch?v=xGoPdHH9PlE
Whataboutism + false dichotomy.