• 0 Posts
  • 9 Comments
Joined 3 months ago
Cake day: August 30th, 2025

  • And their products are so fucking shit. Today I wanted to shitpost in a Discord server I’m a part of. I felt like if I put effort into it, it wouldn’t really be a shitpost anymore. The idea was minimum effort for a few laughs and we move on. So I loaded up ChatGPT and asked it to generate the meme image. I thought even if it messed up the text, I would just generate it without the text and add the text in with GIMP or something.

    I put in the prompt, and it spewed a lot of nonsense about what I meant and how it was going to generate the image. It said that if I just said “Generate it”, it would generate the image. So I did, and it then said I needed to be signed in for image generation. OK, fine, I signed in with a Gmail account I only ever use for spam, just for occasions such as this. It was happy to start generating.

    It hung on generating for a while, until the status thing in the top right said done, but nothing appeared in the chat. I refreshed the page, which gave me the option to prompt again. I asked: where is the generated image? It said here it is and presented a gray box. It said if you see a gray box, you uploaded it wrong? Wtf are you talking about? I didn’t upload anything. It said it could try generating again. Same exact result: crashing on generation, with the refresh yielding a new, different gray box.

    Like for fuck’s sake, the one thing I thought it would be good at, low-effort shitposting, it failed at. Why the fuck does this company have such a large market cap?

    I can’t wait for this whole AI debacle to be over and done with. Nobody is ever going to pay for your buggy ass bullshit generator.


  • Also just because the code works, doesn’t mean it’s good code.

    I had to review code the other day which was clearly created by an LLM. Two classes needed to talk to each other in a bit of a complex way, so I would expect one class to create some kind of request data object, submit it to the other class, and get back some kind of response data object.

    What the LLM actually did was pretty shocking: it used reflection to reach from one class into the private properties of the other class, straight up steal the data it needed, and do the work itself (wrongly as well, I might add). I just about fell off my chair when I saw this.
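    The language of the original code isn’t stated, so here is a minimal Python sketch of the antipattern, with made-up class and field names. Python’s closest analogue to reflection on private fields is `getattr` on a name-mangled attribute:

```python
class PriceService:
    def __init__(self):
        # Private by convention; callers should use get_rate()
        self.__rates = {"EUR": 1.0, "USD": 1.08}

    def get_rate(self, currency: str) -> float:
        # The intended public API
        return self.__rates[currency]


class Invoice:
    def total_bad(self, service: PriceService, amount: float) -> float:
        # Antipattern: bypass the API and reach into private state via reflection
        rates = getattr(service, "_PriceService__rates")
        return amount * rates["USD"]

    def total_good(self, service: PriceService, amount: float) -> float:
        # Intended design: request the data through the public method
        return amount * service.get_rate("USD")
```

    The request/response design described above corresponds to the `total_good` path: ask the other class through its public interface instead of stealing its internals.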

    So I asked the dev. He said he didn’t fully understand what the LLM did; he wasn’t familiar with reflection. But since it seemed to work in the few tests he did, and the unit tests the LLM generated passed, he thought it would be fine.

    The unit tests were wrong too. I explained to the dev that even with humans, it’s usually a bad idea to have the person who wrote the code also (exclusively) write the unit tests. Whenever possible, have somebody else write the unit tests, so they don’t share the same assumptions and blind spots. With LLMs this is doubly true: the model will straight up lie in the unit tests, if they aren’t complete nonsense to begin with.
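    A hypothetical illustration of how a test written from the same wrong assumption passes while the code is broken (function and numbers are made up):

```python
def apply_discount(price: float, percent: float) -> float:
    # Bug: returns the discount amount, not the discounted price
    return price * percent / 100

# A test generated alongside the code shares its wrong assumption, so it passes:
assert apply_discount(200, 10) == 20.0

# A test written independently from the spec would catch the bug:
# assert apply_discount(200, 10) == 180.0
```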

    I swear to the gods, LLMs don’t save time or money, they just give the illusion that they do. Some task of a few hours takes 20 minutes and everyone claps. But then another task takes twice as long, and we just don’t look at that. And the quality suffers a lot, without anyone really noticing.


  • I’m currently using a lot of those mini hygrometers sold under many different brands. Mine are branded Brifit, but I’ve seen other brands like Oria and Ankilo and many others.

    Here is a listing from Amazon, just to give an idea of the price and the specs (please don’t buy it there):

    https://www.amazon.de/dp/B0D1FS8VR6

    They use Bluetooth and are powered by a little CR2477 coin cell battery, which lasts about a year (it varies a bit; I’ve gotten some to last 10 months, others 14 months). I bought a bunch of them at once, which drops the price a lot. I’ve been using them for about 2 years now and they seem to be accurate and report data often (every few seconds). They are very small and can be tucked away somewhere (be sure they get good airflow tho, so they measure correctly). They are strictly for indoor use only.

    My Home Assistant server (a tiny old Intel NUC thing) is pretty central in my home and has a USB Bluetooth stick attached to it. It’s on a USB extension cable so the antenna of the Bluetooth stick sits outside the enclosure the server is in, for better reception. The sensors are scattered throughout the house and all seem to have an excellent connection. The USB stick I use is a UGreen one which is very common, with excellent support in Home Assistant.

    I think this is the stick I have, at least the picture matches; again, please don’t buy from Amazon.

    https://www.amazon.de/dp/B0BXF13GB7

    UGreen stuff is pretty good and sold in a lot of places.

    Both the Bluetooth stick and the sensors are China specials, but these days it’s very hard to find anything that isn’t. Quality seems great tho.


  • Yes, smart thermostats are great. I live alone and have a somewhat random schedule. Being able to turn on the heat before heading home is a total game changer. If I’m away when I’m usually at home, I can change the schedule in advance, or change it after I’ve already left if I forgot. This helps save money.

    It can also track usage, so you can double check your energy bill against your actual usage. I have a Home Assistant setup with sensors to track usage from the meters, but it’s still a useful tool to have. If you use gas for both heating and hot water, for example, the thermostat can give you the data needed to split the gas bill between the two and see where savings are to be had.

    It’s also an extra temperature and humidity sensor, keeping track of how comfortable your home is, and it can act not just on temperature but on other factors as well. I have a bunch of temperature sensors scattered around my home and the curves are useful for tweaking heating and ventilation, giving an optimal balance between cost and comfort. It also helps prevent things like mold; underheating might save on heating in the short term, but it adds costs in the long term with health issues and mold damage.
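    For the curious, “turn on the heat before heading home” can go through Home Assistant’s REST API (the standard `climate.set_temperature` service). This is only a sketch; the host, entity ID, and token are placeholders:

```python
import json

HA_URL = "http://homeassistant.local:8123"  # placeholder host
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"      # create one in your HA user profile

def set_heat_request(entity_id: str, temperature: float) -> dict:
    """Build the REST call that asks Home Assistant to set a thermostat."""
    return {
        "url": f"{HA_URL}/api/services/climate/set_temperature",
        "headers": {
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"entity_id": entity_id, "temperature": temperature}),
    }

req = set_heat_request("climate.living_room", 21.0)
# Send it with e.g. requests.post(req["url"], headers=req["headers"], data=req["body"])
```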

    I have a lot of automation, but I have one rule: everything must still basically work when the internet is out or the home automation has issues. So I use physical switches with sensors and relays; when everything fails, the lights will still turn on and off with the switch. If there is no internet, physically turning the thermostat up or hitting the big override button next to the heater still turns on the heat. Stuff like that is important: it’s a luxury and a convenience, but it must never become a hindrance.

    I try to use open source stuff where I can and have contributed to some projects. I’ve made stuff myself, like sensors with self-made PCBs in 3D-printed enclosures. But I also use some proprietary stuff, for example the Nest thermostat. I bought it about 10 years ago, mostly because I loved the design. This was when Nest had recently been acquired by Google and was still fully autonomous. Back then there weren’t many alternatives and the Nest was by far the best looking one (imho). The software absolutely sucks: the old Nest app didn’t get many updates under Google, but the older models still only work in the Nest app. With Home Assistant I can work around most of it, though. It’s a shame, because Nest had so much potential and was doing good stuff; now under Google their products are kinda meh.


  • Think of it this way:

    If I ask you “can a car fly?”, you might say: well, if you put wings on it or a rocket engine or something, maybe? OK, I say, so I point at a car on the street and ask: do you think that specific car can fly? You will probably say no.

    Why? Even though you might not fully understand how a car works and all the parts that go into it, you can easily tell it does not have any of the things it needs to fly.

    It’s the same with an LLM. We know what kinds of things are needed for true intelligence, and we can easily tell an LLM does not have the parts required. So an LLM alone can never ever lead to AGI; more parts are needed. This holds even though we might not fully understand how the internals of an LLM function in specific cases, and might also not know exactly what parts are needed for intelligence or how those work.

    A full understanding of all parts isn’t required to discern large scale capabilities.