

You’re really willing to die on the hill of poop camera subscriptions; I’m not willing to waste time diving further into the subject. You’re ignorant of the topic, which is fine, but I won’t be the one explaining it further.



Because it’s an incredibly unreliable data point by itself, and interpreting it requires significantly more than visual analysis to control for several confounding variables.


Yes, and you don’t have to hire a plumber to fix your sink if you’re a plumber.
You are severely misunderstanding the point being made. Imagine you have a leaky pipe and you hire a professional plumber. They charge you $500 and say “yep, I can take a look, and I conclude it’s a leaky pipe! My job here is done, see you next time. I can also give you an AI-generated list of reasons pipes often get leaky.”
What I’m telling you is precisely that this company can’t provide the professional analysis you just described.


You’re telling me there’s zero valuable information in photos of feces?
Nope. I’m saying that a private company with whatever training set they have, plus a cheap RGB camera and an AI model, is not going to give you any information you can’t derive by simply looking at the feces yourself, much like the table you just linked. And that table itself is an oversimplification: being unable to take other parameters into account, it also contains potentially misleading conclusions.


My field is bioinformatics. I’m willing to bet $500 there’s little to no valuable data being gathered at all, and quite a lot of noise, rather than anything relevant for your health. I’m sure that, just like your smart watch, they can make it sound like deep insight and health exploration, but I guarantee you it’s not.


A subscription… for a toilet? Internet access… for a toilet? Cameras… for a toilet? Am I having a fever dream?


I buy single-purpose devices that are fully offline, durable, user-serviceable, and useful… and then I go for a long time without buying anything but food. It’s almost like setting a new personal record: how many days in a row can I go without buying a single thing?


Kicking off? It started ages ago.
The user explained later exactly what went wrong. The AI gave a list of instructions as steps, and one of the steps was deleting a specific Node.js folder on that D:\ drive. The user didn’t want to follow the steps and just said “do everything for me”, for which the AI prompted for confirmation and received it. The AI then indeed ran commands freely, with the same privileges as the user; however, this being an AI, the commands were broken and simply deleted the root of the drive rather than just the one folder.
So yes, technically the AI didn’t simply delete the drive - it asked for confirmation first. But also yes, the AI did make a dumb mistake.
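As an aside, a classic way a generated delete command ends up hitting the drive root instead of one folder is an empty or unset path segment. This is purely a hypothetical sketch of that failure mode (the folder name and drive are made up, not the actual commands the AI ran):

```python
import ntpath  # Windows path rules, so this behaves the same on any OS

# Hypothetical illustration: the intended target vs. what you get when the
# folder-name variable in a generated command ends up empty.
def delete_target(drive: str, folder: str) -> str:
    return ntpath.join(drive, folder)

print(delete_target("D:\\", "node_modules"))  # D:\node_modules - one folder
print(delete_target("D:\\", ""))              # D:\ - the entire drive root
```

Feed the second path to a recursive delete and the whole drive goes, which is consistent with what the user described.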


The Steam Deck is not sold at a loss. The initial pricing for the 64 GB unit was barely profitable, but this quickly changed with production ramping up.
This was confirmed by Valve themselves in an interview that happened months after Gabe’s famous comments about the pricing.
So yes, Valve profits from the games too, but that’s not used to subsidize the Steam Deck’s price.


Eh, that’s the problem with email - it’s much harder to change and migrate, because you can’t guarantee others will use your new address, much less find out who somehow still has the old one, sends messages to it, and expects a reply.


80% is 80%; there’s no “80% that will last a lot longer”.


The Deck automatically stops charging and lets the battery drain to around 95% when plugged in anyway.


There’s absolutely no way a setting buried in a menu is designed to be constantly toggled depending on whether you’re using the device docked or not.
Otherwise, the toggle would exist in the quick access menu.
That’s also not how it works on laptops that offer it, so I doubt the idea is for users to be constantly toggling it.


No, I’m referring to the fact that unplugging early to avoid degrading battery health, and therefore capacity, makes no sense… because you’re already reducing the battery’s capacity by only using 80% of its charge.


The logic is deeply flawed though.
Keep your battery at 80% to preserve its health, because lithium batteries prefer that. Sure. But here’s what it effectively means:
Keep your battery forever stuck at 80%… to avoid losing battery capacity… so to avoid having less battery runtime in the future, you limit your battery runtime now, suffering today the consequences you feared for the future.
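To put rough numbers on the trade-off (the capacity and degradation figures here are assumptions for illustration, not measurements):

```python
# Hypothetical numbers to illustrate the trade-off described above.
design_capacity_wh = 40.0

# Strategy A: always charge to 100%; assume health degrades to 90% over two years.
usable_a_now = 1.00 * design_capacity_wh    # 40.0 Wh today
usable_a_later = 0.90 * design_capacity_wh  # 36.0 Wh after two years

# Strategy B: cap the charge at 80%; assume health stays at a perfect 100%.
usable_b_now = 0.80 * design_capacity_wh    # 32.0 Wh today
usable_b_later = 0.80 * design_capacity_wh  # 32.0 Wh after two years

# Even granting the cap its best case, the runtime you gave up never comes back.
print(usable_a_now, usable_a_later)  # 40.0 36.0
print(usable_b_now, usable_b_later)  # 32.0 32.0
```

Under these assumed numbers, the capped battery delivers less runtime today and still less runtime two years from now.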


And yet they still haven’t managed to get enough people to pay the subscription costs, except the guys trying to package it as a SaaS and hoping the customers don’t notice they’re just a fancy middleman.
They can scale up training all they want; there’s a natural price point most customers won’t go over. And if you’re thinking businesses will pay that extra cost because they can save money on actual workers… sure, for a few months. Then they realize what happens when they leave their super-intelligent AI agents alone for a few weeks and a website changes its default layout, breaking the entire workflow; or when an important client receives an absurd automated email; or when their AI note-taker and financial-planning agent is incapable of answering why $20,000 disappeared.


You’re not wrong, but it’s still a language model, which is not the entirety of how intelligence and reasoning work. There are clear limitations that do not arise only from people toying around with ChatGPT, but have been known for decades from the theoretical understanding of what language is.


It’s also just a language model. People have trouble internalising what this means, because it sounds smarter than it actually is.
ChatGPT does not reason in the way you think it does, even when they offer those little reasoning windows that show the “thought process”.
It’s still only predicting the next likely word based on the previous words. It can do that many times and feed in extra words to steer it one way or another, but that’s very different from understanding a topic and reasoning within it.
So as you keep pushing the model to learn more and more, you start getting many artifacts, because it’s not actually learning these concepts - it’s just getting more data to infer “what’s the most likely word X that would follow words Z, Y and A?”
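A toy sketch of that loop, with a bigram table standing in for the model (a real LLM uses tokens and a neural network, and the corpus here is made up, but the look-at-context, pick-a-likely-word, append, repeat shape is the same):

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat and the cat slept on the mat".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(start: str, n: int) -> list[str]:
    """Repeatedly append the most likely next word - no understanding involved."""
    out = [start]
    for _ in range(n):
        options = bigrams[out[-1]]
        if not options:  # dead end: this word never had a successor
            break
        out.append(options.most_common(1)[0][0])
    return out

print(" ".join(generate("the", 4)))
```

Every continuation it produces is grammatical-looking, yet the program has no concept of cats or mats - only counts of which word tended to follow which.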


Oooof