• 1 Post
  • 20 Comments
Joined 1 year ago
Cake day: June 23rd, 2023

  • This is the method I use in your scenario, OP. You can use Folder2iso to get the files you need into the VM. If the OS has official VMware Tools, you can also mount the VMware Tools ISO straight from Workstation into the VM, which gives you the clipboard service so you can copy and paste files between the host and the VM, if that’s permitted within your isolation needs.

    Otherwise, go the ISO route. You just can’t bring anything from the VM back out to the host, is all.
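    For what it’s worth, the ISO trick doesn’t strictly need a GUI tool. Here’s a minimal, hypothetical sketch of building a transfer ISO in Python with the third-party pycdlib library (pycdlib is just one option I’m suggesting, and the file names are placeholders):

    ```python
    # Build a one-way "transfer" ISO to mount into an isolated VM.
    # Requires: pip install pycdlib
    import pycdlib

    iso = pycdlib.PyCdlib()
    iso.new(interchange_level=3, joliet=3)  # Joliet allows long, mixed-case names

    # Plain ISO9660 paths are uppercase 8.3 names with a ";1" version suffix;
    # the joliet_path carries the friendly name the guest OS will show.
    iso.add_file('payload.txt', '/PAYLOAD.TXT;1', joliet_path='/payload.txt')

    iso.write('transfer.iso')
    iso.close()
    ```

    Mount the resulting image as a virtual CD drive in Workstation; the guest sees it read-only, which conveniently enforces the one-way flow.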



  • It’s very much still needed and heavily utilised in the enterprise world. Volume size is usually the lowest priority when it comes to arrays; redundancy and IOPS (input/output operations per second, i.e. how many concurrent transactions the storage can service) are typically the priority. The exception here would be backup and archive storage, where IOPS matters less and volume size matters more.

    As far as replacing sectors goes, I’ve never heard of this and I might just be ignorant on the subject, but as far as I know you can’t “replace” a bad sector, only mark it as bad and stop using it, and whatever was there before is gone. This has been the case since HDD days. This is also why we use RAID: parity across disks to protect data.
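    To make the parity idea concrete, here’s a toy sketch of RAID-5-style XOR parity (purely illustrative, nothing like what a real controller does internally):

    ```python
    # XOR parity: losing any single "disk" is recoverable from the rest.
    from functools import reduce

    stripes = [bytes([0x10, 0x20, 0x30]),   # disk 0
               bytes([0x0F, 0xFF, 0x00]),   # disk 1
               bytes([0xAA, 0x55, 0x42])]   # disk 2
    parity = bytes(reduce(lambda a, b: a ^ b, blk) for blk in zip(*stripes))

    # Simulate losing disk 1, then rebuild it from the survivors plus parity.
    lost = stripes.pop(1)
    rebuilt = bytes(reduce(lambda a, b: a ^ b, blk) for blk in zip(*stripes, parity))
    assert rebuilt == lost
    ```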

    Generally production storage will be in RAID-10, and backup/archive storage in RAID-6 or in some cases RAID-60, though I’m personally not a fan of the latter.

    You’d also consider how many disks are in the volume, because there is a sweet spot. Too many disks means a higher likelihood of total array failure due to simultaneous disk failures, and more data loss if that happens; too few disks and you won’t have good redundancy, capacity or performance either (depending on RAID level).
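    As a back-of-the-envelope sketch of the capacity side of that trade-off (simplified numbers; real arrays lose extra space to metadata, hot spares and so on):

    ```python
    def usable_tb(level: str, disks: int, disk_tb: float) -> float:
        """Rough usable capacity, ignoring metadata/hot-spare overhead."""
        if level == "raid10":
            return (disks // 2) * disk_tb   # half the disks are mirrors
        if level == "raid6":
            return (disks - 2) * disk_tb    # two disks' worth of parity
        raise ValueError(f"unhandled level: {level}")

    for n in (6, 12, 24):
        print(f"{n} disks @ 10 TB: "
              f"RAID-10 = {usable_tb('raid10', n, 10.0):.0f} TB, "
              f"RAID-6 = {usable_tb('raid6', n, 10.0):.0f} TB")
    ```

    RAID-6 wins on capacity as the disk count grows, but every extra disk is another chance of a concurrent failure during a rebuild, which is exactly the sweet-spot tension above.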

    The biggest change I see in RAID these days is moving away from hardware RAID cards and into software-based solutions like Microsoft Storage Spaces, md, ZFS and similar. These all have their own way of doing things and some can even synchronise the data with other hosts.

    Hope this helps!




  • TIL about Fedora; last I knew it was a rolling, bleeding-edge OS. Clearly lots of movement in the Red Hat camp.

    As for gaming, drivers were not the problem for me; getting games to run with ease was. On OpenSUSE, I just install Steam, enable Proton and basically go at that point. Doing the same on Red Hat was non-trivial. Could be a skill issue, but I had a better time getting going with OpenSUSE TW.


  • Sort of, OpenSUSE Tumbleweed. I started on OpenSUSE Leap but had issues getting things like GPU and Steam working. Red Hat was also a non-starter because of the lack of gaming functionality.

    TW works great for gaming, and the enterprise features I care about (like domain joining) work out of the box. It’s certainly harder to set up than something more geared towards home use (typically one of the various downstreams of Debian or Arch), but that doesn’t bother me.


  • Servers are a different story but for Desktop, OpenSUSE.

    Because:

    • It’s stable even on their rolling OS (Tumbleweed)
    • Gaming works exceptionally well
    • CUDA works with little effort
    • RPM-based (personal preference)
    • zypper is an excellent package manager and my experience has been better than that of yum/dnf
    • Extensive native packages and 3rd party repos
    • No covert advertising in the OS
    • Minimal (no?) telemetry
    • Easy to bind to Active Directory
    • It feels polished and well built
    • I do not have to mess with it to make it work

    Part of my transition from Windows to Linux was that basic tasks like installing software, or even the OS itself, shouldn’t be a high-effort endeavour. I should be able to point to a package file or run a package manager and go about my day, without running “make” and working my way through dependency hell.

    I say this as a Linux user of all different flavours for well over 15 years who has a deep love for what it brings to the table. If we want it to be commonplace with non-IT folks, it needs to work and it needs to be simple to use.


  • Jumping on the OpenSUSE bandwagon. I use it daily and have been running the same install of Tumbleweed for years without issue. I’m using KDE Plasma, which it lets you choose as part of the installation, so that requirement is covered for you as well.

    If you’re familiar with Red Hat you’ll feel at home on it. Zypper is the package manager instead of yum/dnf and works really well (particularly when coping with dependency issues).

    I’ve worked with heaps of distros over the years (Ubuntu, Debian, Fedora, RHEL, old school Red Hat, CentOS, Rocky, Oracle, even a bit of Alpine and some BSD variants) and OpenSUSE is definitely my favourite for a workstation.




  • Not that I’m advocating for Apple’s inexcusable behaviour, but as someone who’s worked in IT managing fleets of hundreds of ThinkPads (among others like Apple, Dell, Acer and HP), respectfully, they are far less reliable and durable than a MacBook. The only devices I had with higher failure rates than ThinkPads were Acer laptops.

    They are certainly more repairable, but so are others like Dell and HP. Lenovo were one of the earlier manufacturers to pull some anti-repair moves such as soldering memory to the mainboard (on the Yoga models).

    I think your statement is far more accurate in the days when IBM owned the ThinkPad brand, but unfortunately Lenovo have run it into the ground as far as quality goes.

    All that said, I certainly hope we see more projects like Framework so that these big manufacturers can get some sort of reality check.




  • Sure, I’ve used it in both server and NAS scenarios. The NAS was where we had the most issues. If the maintenance tasks for BTRFS weren’t scheduled to run (balance, defrag, scrub and another one I can’t recall), the disk could become “full” without actually being full. If I recall correctly it’s to do with how it handles metadata. There’s space, but you can’t save, delete or modify anything.

    On a VM, it’s easy enough to buy time by growing the disk and running the maintenance. On a NAS or physical machine, however, you’re royally screwed without adding more disks (if that’s even an option). This “need to have space to make space” thing was pretty suboptimal.

    Granted, now that I know better and am aware of the maintenance tasks, I simply schedule them (with cron or similar). But I still have a bit of a sour taste from it, lol. Overall I don’t think it’s a bad FS as long as you look after it.
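    If it helps anyone, here’s a minimal sketch of the kind of maintenance job you could call from cron (assumes btrfs-progs is installed; /data is a placeholder mountpoint, and the usage thresholds are common starting points rather than gospel):

    ```python
    #!/usr/bin/env python3
    # Periodic btrfs maintenance: rebalance mostly-empty chunks so the
    # allocator can reclaim them, then scrub to verify checksums.
    import subprocess

    MOUNTPOINT = "/data"  # placeholder: your btrfs mountpoint

    # Rewrite data/metadata chunks that are less than 50% full.
    subprocess.run(["btrfs", "balance", "start",
                    "-dusage=50", "-musage=50", MOUNTPOINT], check=True)

    # -B keeps the scrub in the foreground so cron sees the exit status.
    subprocess.run(["btrfs", "scrub", "start", "-B", MOUNTPOINT], check=True)
    ```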


  • This for sure. As a general rule of thumb, I use XFS for RPM-based distros like Red Hat and SuSE, and EXT4 for Debian-based ones.

    I use ZFS if I need to do software RAID, and I avoid BTRFS like the plague. BTRFS requires a lot of hand-holding in the form of maintenance that is far from intuitive, and I expect better from a modern filesystem (especially when there are others that do the same job hassle-free). I have had FS-related issues on BTRFS systems more than on any other, purely because of issues with how it handles data and metadata.

    In saying all that, if your data is valuable then ensure you back it up, and you won’t need to worry about failures so much.