• 0 Posts
  • 21 Comments
Joined 1 year ago
Cake day: December 22nd, 2024

  • The user explained later on exactly what went wrong. The AI gave a list of instructions as steps, and one of the steps was deleting a specific Node.js folder on that D:\ drive. The user didn’t want to follow the steps themselves and just said “do everything for me”; the AI prompted for confirmation and received it. The AI then did run commands freely, with the same privileges as the user, but, this being an AI, the commands it generated were broken and deleted the root of the drive rather than just the one folder.

    So yes, technically the AI didn’t simply delete the drive - it asked for confirmation first. But also yes, the AI did make a dumb mistake (a path bug of the kind sketched below).
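
    The exact commands weren’t shared, so this is only a hypothetical sketch (in Python, with an invented helper and folder name) of how a deletion script can silently collapse to the drive root when a path variable comes back empty:

      import os
      import shutil

      def remove_node_folder(drive_root: str, folder_name: str) -> None:
          # Build the target path; with folder_name == "" this collapses to the
          # drive root itself: os.path.join("D:\\", "") == "D:\\".
          target = os.path.join(drive_root, folder_name)

          # The guard a careless (or machine-generated) script omits:
          if os.path.abspath(target) == os.path.abspath(drive_root):
              raise ValueError(f"refusing to delete drive root: {target!r}")

          shutil.rmtree(target)

      # Intended call:  remove_node_folder("D:\\", "node_modules")
      # Broken call:    remove_node_folder("D:\\", "") - the guard catches it here,
      # but without it shutil.rmtree("D:\\") wipes the whole drive, not one folder.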

  • And yet they still haven’t managed to get enough people to pay the subscription costs, except the guys trying to package it as a SaaS and hoping the customers don’t notice they’re just a fancy middleman.

    They can scale up training all they want; there’s a natural price point most customers won’t go over. And if you’re thinking businesses will pay that extra cost because they can save money on actual workers… sure, for a few months - until they find out what happens when they leave their super intelligent AI agents alone for a few weeks and a website changes its default layout, breaking the entire workflow; or when an important client receives an absurd automated email; or when their AI note taker and financial planning agent can’t explain why $20,000 disappeared.

  • It’s also just a language model. People have trouble internalising what that means, because the output sounds smarter than the system actually is.

    ChatGPT does not reason in the same way you think it does, even when they offer those little reasoning windows that show the “thought process”.

    It’s still only predicting the next likely word based on the words that came before it. It can do that many times and feed extra words back in to steer it one way or another, but that’s very different from understanding a topic and reasoning within it.

    So as you keep pushing the model to learn more and more, you start getting many artifacts, because it isn’t actually learning these concepts - it’s just getting more data for inferring “what’s the most likely word X that would follow words Z, Y and A?” (the kind of loop sketched below).
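
    As a toy illustration of that loop (not how ChatGPT is actually built - real models use a neural network over the whole context, not word counts), here’s a minimal next-word predictor that only tracks which word tends to follow which:

      import random
      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat and the dog sat on the rug".split()

      # Count, for each word, which words follow it and how often.
      following = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev][nxt] += 1

      # Generate by repeatedly asking "what's the most likely next word?"
      word, output = "the", ["the"]
      for _ in range(8):
          candidates = following.get(word)
          if not candidates:
              break
          # Sample in proportion to the counts (a crude stand-in for the
          # model's probability distribution over the next token).
          word = random.choices(list(candidates), weights=candidates.values())[0]
          output.append(word)

      print(" ".join(output))  # e.g. "the cat sat on the rug" - fluent output, zero understanding

    Scale that idea up by billions of parameters and you get fluent, often correct text, but the failure mode stays the same: plausible continuations, not reasoned conclusions.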