How would you distinguish a sufficiently advanced word calculator from an actual intelligent, conscious agent?
The same way you distinguish a horse with a plastic horn from a real unicorn: you won’t see a real unicorn.
In other words, your question disregards what the text says: you won’t get anything remotely resembling an actual intelligent agent out of these large token-prediction models. You need a different approach, one that acknowledges that linguistic competence is not the same as reasoning.
Nota bene: this does not mean “AGI is impossible”. That is not what I’m saying. I’m saying “LLMs are a dead end for AGI”.