• 0 Posts
  • 49 Comments
Joined 1 year ago
Cake day: June 23rd, 2023

  • Speaking of slot machines, every slot machine, electronic poker machine, etc. is just a state machine that operates based on a stream of random numbers fed into it by another device.

    The random number generators (RNGs) used for gaming are highly regulated (at least here in the US) and only a small handful of companies make them. They have to be certified for use by organizations like the Nevada Gaming Control Board (NGCB). RNGs have to be secured so only NGCB officials and other key people can access them. If they are opened unexpectedly or otherwise tampered with then they need to go into lockdown and stop generating numbers until an official resets them.

    The RNGs also need to be able to replay sequences of numbers on demand. If the same sequence of numbers is fed into a game and the user plays the same way, then the result of the game should be 100% identical each time.
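
    As a rough illustration of that replay property, here’s a toy sketch in Python with a made-up three-reel game (not any real gaming RNG or payout table):

    import random

    def play_game(rng, spins=5, credits=100):
        """A toy 'slot machine': a state machine consuming numbers from an RNG stream."""
        results = []
        for _ in range(spins):
            reels = [rng.randrange(10) for _ in range(3)]   # three symbols per spin
            payout = 50 if len(set(reels)) == 1 else 0      # pay out on three of a kind
            credits += payout - 5                           # 5-credit bet per spin
            results.append((reels, credits))
        return results

    # Feeding the same number stream (same seed) into the game and playing the
    # same way yields identical results every time - that's the property that
    # makes the games auditable.
    assert play_game(random.Random(12345)) == play_game(random.Random(12345))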






  • ‘21 Model Y long range. Overall it drives well, and the supercharger network is really nice. We took it on a trip up & down a good portion of the east coast last year and never had any issues charging it. We have a couple 30 lb dogs that love going for rides, so things like dog mode are really nice as well.

    Things I really do not like:

    • The reliance on cameras for all sorts of features like auto high beams and auto wipers, on top of traffic-aware cruise control (aka autopilot) and full self driving (FSD) if you have it. I regularly have the wipers go off on clear, sunny days. The auto high beams are so unreliable I don’t use them, and that means no autopilot at night. I have no faith in even trying out FSD because of how glitchy everything else is.
    • The minimal use of physical controls. I have to take my eyes off the road just to switch wiper speed/mode.
    • Software updates have, more than once, changed my settings for things like autopilot without warning, and I’ve only discovered it when driving and turning autopilot on.
    • The maps have lots of routing issues. It shows roads in my neighborhood that don’t yet exist (new development under construction), regularly routes me the wrong way (there’s a left turn near my home that it thinks it can’t take, so it tries to route me around two sides of a triangle as a result), and on our road trip we found a stretch of highway that it thought it couldn’t drive on and kept trying to route us along side streets. And there’s no way I know of to report these issues so they can be fixed. Apps like Waze make that trivial.

    Pretty much all of these are reasons why I refuse to even try FSD and discourage others from using it. About the only way I’ll give it another chance is if a truly independent third party tests it and says all these issues have been resolved.


  • I admit I own a Tesla. Given all the recent erratic behavior:

    • Not only will I not recommend Teslas to anybody who might ask about them, I will warn them to look at company & CEO behavior over the years, and actively discourage others from buying one.
    • When the time comes, I will not be replacing my current car with another Tesla. I will still likely go with an EV, but by then there should be significantly more (and better) options available.

    About the only way I’ll change either of these is if Elon steps down and completely removes himself from any control over Tesla. But I don’t see that happening and I certainly won’t be holding my breath.




  • A well-thought-out and implemented backup system, along with a good security setup, is how you deal with malware. If backups won’t protect you from malware then you’re doing backups wrong. A proper backup implementation keeps a series of full backups plus incremental backups based on those full ones. Say your data doesn’t change very often; then you might do a full backup once a month and incremental ones twice a week. You keep 6 months’ worth of those full & incremental sets; you don’t just overwrite the backups with new ones (a rough sketch of that kind of schedule is below).

    If you’re doing backups like that and you suffer a malware attack then you have the ability to recover data from as far back as 6 months ago. The chances you don’t discover malware encrypting your data for 6+ months are tiny. If you’re really paranoid then you also test recovering files from random backups on a regular basis.

    My employer has detected and blocked multiple malware attacks using a combination of the above practices plus device management software that can detect unusual NAS activity and block suspect devices on our networks. Each time our security team was able to identify the encrypted files and restore over 99% from backups.
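
    Here’s a minimal sketch of that kind of schedule and retention check (the specific days and the 6-month window are just the example numbers from above, not a recommendation):

    from datetime import date

    RETENTION_DAYS = 6 * 31   # keep roughly six months of backup sets

    def backup_type(day):
        """Full backup on the 1st of the month, incrementals on Wednesdays and Saturdays."""
        if day.day == 1:
            return "full"
        if day.weekday() in (2, 5):   # 2 = Wednesday, 5 = Saturday
            return "incremental"
        return None

    def is_expired(backup_day, today):
        """Only prune sets older than the retention window; never overwrite newer ones."""
        return (today - backup_day).days > RETENTION_DAYS

    # A full backup taken five months ago is still inside the recovery window:
    print(is_expired(date(2024, 1, 1), date(2024, 6, 1)))   # False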


  • Suppose you’re hit by a ransomware attack and all the data on your NAS gets encrypted. Your RAID “backup” is just as inaccessible as everything else. So it’s not a backup. A true backup would let you recover from the ransomware attack once you have identified and removed the malware that allowed the attack.









  • I’m a 50+ year old IT guy who started out as a C/C++ programmer in the ’90s, and I’m not that worried.

    The thing is, all this talk about AI isn’t very accurate. There is a huge difference between the LLM stuff that ChatGPT etc. are built on and true AI. These LLMs are only as good as the data fed into them. The adage “garbage in, garbage out” comes to mind. Anybody who blindly relies on them is a fool. Just ask the lawyer who used ChatGPT to write a legal brief. The “AI” made up references to non-existent cases that looked and sounded legitimate, and the lawyer didn’t bother to check for accuracy. He filed the brief, and it was the judge who discovered it was a work of fiction.

    Now I know there’s a huge difference between programming and the law, but there are still a lot of similarities here. An AI-generated program is only going to be as good as the samples provided to it, and at the very least you’ll probably want a human to review that code to ensure it’s truly doing what you want.

    I’m also concerned that programming LLMs could be targeted by scammers and the like: train the LLM to harvest sensitive information and obfuscate the code that does it, so that it’s difficult for a human to spot the malicious code without a highly detailed analysis of the generated output. That’s another reason to want to know exactly what the LLM is trained on.


  • IphtashuFitz@lemmy.world to Asklemmy@lemmy.ml · Home automation - why?

    Since we’re an iPhone family I use iCloud3 in Home Assistant to track our devices. After setting that up and associating the phones with people in HA, it was just a matter of creating triggers based on us entering the home zone:

    alias: Somebody Arrives Home
    trigger:
      - platform: zone
        entity_id: person.jack
        zone: zone.home
        event: enter
      - platform: zone
        entity_id: person.jill
        zone: zone.home
        event: enter
    

    We live at the end of a dead-end, so I set up the home zone to extend down the road a bit. That gives iCloud3 enough time to figure out we’re home and trigger the automation in HA while we’re still approaching. I combine the above with a check to see if it’s roughly sunset to sunrise, and if it is then turn on the outdoor lights.
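
    The sunset-to-sunrise check and the action look roughly like this (one way to express it; light.outdoor_lights is a placeholder for whatever your outdoor light entity is called):

    condition:
      - condition: state
        entity_id: sun.sun
        state: below_horizon
    action:
      - service: light.turn_on
        target:
          entity_id: light.outdoor_lights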

    For doing things like turning off the lights when nobody is home, I have a similar trigger for everybody leaving the home zone, followed by a conditional that verifies everybody is away:

    condition:
      - condition: or
        conditions:
          - condition: template
            value_template: >-
              {{ states('person.jack') != 'home' and states('person.jill') != 'home' }}