Yeah DNS is, in general, just goofy and weird and a lot of the interactions I wouldn’t expect someone who’s done it for years to necessarily know.
And besides, the round-robin thing is my favorite weird DNS fact so any excuse to share it is great.
I mean, recovery from parity data is how all of this works, this just doesn’t require you to have a controller, use a specific filesystem, have matching sized drives or anything else. Recovery is mostly like any other raid option I’ve ever used.
The only drawback is that the parity data is mostly equivalent in size to the actual data you’re making parity data of, and you need to keep a couple copies of indexes since if you lose the index or the parity data, no recovery for you.
In my case, I didn’t care: I’m using the oldest drives I’ve got as the parity drives, and the newer, larger drives for the data.
If i were doing the build now and not 5 years ago, I might pick a different solution but there’s something to be said for an option that’s dead simple (looking at you, zfs) and likely to be reliable because it’s not doing anything fancy (looking at you, btrfs).
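The recovery-from-parity idea above boils down to something simple in the single-parity case: parity is the XOR of all the data blocks, so any one lost block can be rebuilt by XORing the parity with the surviving blocks. A toy illustration (snapraid's actual on-disk format is more involved than this, so treat it as the concept, not the implementation):

```python
# Toy single-parity recovery: parity = XOR of all data blocks, so any
# one missing block can be rebuilt by XORing parity with the survivors.

def make_parity(blocks: list[bytes]) -> bytes:
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def recover(parity: bytes, surviving: list[bytes]) -> bytes:
    # XOR of parity with every surviving block yields the lost block.
    return make_parity([parity, *surviving])

drives = [b"AAAA", b"BBBB", b"CCCC"]   # equal-size data blocks
parity = make_parity(drives)

# "Lose" the middle drive and rebuild it from parity + the others.
rebuilt = recover(parity, [drives[0], drives[2]])
assert rebuilt == b"BBBB"
```

This is also why losing the parity data itself is unrecoverable: it's the only redundancy in the system.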
From a usage (not technical) standpoint, the most equivalent commercial/prefabbed solution would probably be something like unraid.
A tool I’ve actually found way more useful than actual raid is snapraid.
It just makes a giant parity file which can be used to validate, repair, and/or restore your data in the array without needing to rely on any hardware or filesystem magic. The validation bit being a big deal, because I can scrub all the data in the array and it’ll happily tell me if something funky has happened.
It’s been super useful on my NAS, where it’s the only thing standing between my pile of random drives and data loss.
There’s a very long list of caveats as to why this may not be the right choice for any particular use case, but for someone wanting to keep their picture and linux iso collection somewhat protected (use a 321 backup strategy, for the love of god), it’s a fairly viable option.
Uh, don’t do that if you expect your mail to be delivered.
Depending on how the DNS service is set up, multiple PTRs may be returned in round-robin fashion, and if you return a PTR that doesn’t match what your HELO claims you are, then congrats on your mail likely being tossed in the trash.
Pick the most accurate name (that is, match your HELO domain), and only set one PTR.
(Useless fact of the day: multiple A records behave the same way and you can use that as a poverty-spec version of a load balancer.)
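The round-robin behavior is just the server rotating the record set on each query, so successive clients see a different "first" address. A toy model of it (real resolvers and servers vary in whether and how they rotate; the IPs here are documentation addresses, not real ones):

```python
from collections import deque

# Toy model of DNS round-robin: rotate the record set on each query so
# successive clients get a different "first" answer, which is what makes
# multiple A records work as a poverty-spec load balancer.

class RoundRobinZone:
    def __init__(self, addresses):
        self._records = deque(addresses)

    def query(self):
        answer = list(self._records)
        self._records.rotate(-1)   # next query starts one record later
        return answer

zone = RoundRobinZone(["192.0.2.10", "192.0.2.11", "192.0.2.12"])
first = zone.query()[0]    # "192.0.2.10"
second = zone.query()[0]   # "192.0.2.11"
third = zone.query()[0]    # "192.0.2.12"
```

Since most clients just try the first address returned, the rotation spreads connections across the hosts without any actual load balancer in the path.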
sudo smartctl -a /dev/yourssd
You’re looking for the Media_Wearout_Indicator which is a percentage starting at 100% and going to 0%, with 0% being no more spare sectors available and thus “failed”. A very important note here, though, is that a 0% drive isn’t going to always result in data loss.
Unless you have the shittiest SSD I’ve ever heard of or seen, it’ll almost certainly just go read-only and all your data will be there, you just won’t be able to write more data to the drive.
Also you’ll probably be interested in the Total_LBAs_Written value, which, once converted from sectors to gigabytes, will tell you how much data has been written to the drive.
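The conversion is just the raw value times the sector size, which is usually 512 bytes (check your drive, though, since some report differently). A quick sketch with a made-up raw value:

```python
# Total_LBAs_Written is a raw sector count; multiply by the sector size
# (usually 512 bytes, but check your drive) to get bytes written.
SECTOR_SIZE = 512

def lbas_to_tb(lbas: int, sector_size: int = SECTOR_SIZE) -> float:
    return lbas * sector_size / 1e12

# e.g. a (made-up) raw value of ~390 billion LBAs is about 200 TB written
print(round(lbas_to_tb(390_000_000_000), 1))  # 199.7
```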
Hell, maybe not since 1997!
Office 2000 was peak office: it had the definitive version of Clippit, and every actually useful feature you’ll probably ever need to type and edit any sort of document.
…I will say, though, that Excel has improved for the weirdos that want 100,000 row spreadsheets since then, but I mean, that’s a small group of people who need serious help.
This has nothing to do with anything, but whatever.
As a FunFact™, you’re more likely to have the SSD controller die than the flash wear out at this point.
Even really cheap SSDs will do hundreds and hundreds of TB written these days, and on a normal consumer workload we’re talking years and years and years and years of expected lifespan.
Even the cheap SSDs in my home server have been fine: they’re pushing 5 years on this specific build, and about 200 TBW on the drives and they’re still claiming 90% life left.
At that rate, I’ll be dead well before those drives fail, lol.
Hell I almost got snagged by one recently, and a goodly portion of my last job was dealing with phishing sites all day.
They’ve gotten good with making things look like a proper email from a business that would be sending that kind of email, and if you’re distracted and expecting something you can have at least a moment of ‘oh this is probably legitimate’.
The giveaway was, hilariously, the use of ‘please kindly’ and ‘needful’, which uh, aren’t phrases this particular company would have actually used in an email. So: saved by scammers not realizing that Americans, at least, don’t actually use those two phrases in conversation.
I just uh, wrote a bash script that does it.
It dumps databases as needed, and then makes a single tarball of each service. Or a couple depending on what needs doing to ensure a full backup of the data.
Once all the services are backed up, I just push all the data to an S3 bucket, but you could use rclone or whatever instead.
It’s not some fancy cool toy kids these days love like any of the dozens of other backup options, but I’m a fan of simple, and well, a couple of tarballs in an S3 bucket is about as simple as it gets since restoring doesn’t require any tools or configuration or anything: just snag the tarballs you need, unarchive them, done.
I also use a couple of tools for monitoring the progress and a separate script that can do a full restore to make sure shit works, but that’s mostly just doing what you did to make and upload the tarballs backwards.
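The flow is basically: dump anything that needs dumping, tar up each service, push the tarballs to object storage. A minimal sketch of that shape (in Python rather than bash, with made-up paths and bucket name, so a sketch of the idea rather than the actual script):

```python
import tarfile
from pathlib import Path

# Rough shape of the backup flow: tar up each service's directory into
# a single archive, then push the archives to object storage.

def backup_service(name: str, data_dir: Path, out_dir: Path) -> Path:
    archive = out_dir / f"{name}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(data_dir, arcname=name)   # store paths relative to the service name
    return archive

# After building the tarballs you'd upload them, e.g.:
#   aws s3 cp backups/ s3://my-backup-bucket/ --recursive
# (or rclone, etc.) Restore really is just: download, tar -xzf, done.
```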
I’m finding 8 years to be pretty realistic for when I have drive failures, and I did the math when I was buying drives and came to the same conclusion about buying used.
For example, I’m using 16 TB drives, and for the Exos ones I’m using, a new drive is like $300 and the used pricing seems to be $180.
If you assume the used drive is 3 years old, and that the expected lifespan is 8 years, then the used drive is very slightly cheaper than the new one.
But the ‘very slight’ is literally just about a dollar-per-year less ($36/drive/year for used and $37.50/drive/year for new), which doesn’t really feel like it’s worth dealing with essentially unwarrantied, unknown, used and possibly abused drives.
You could of course get very lucky and get more than 8 years out of the used, or the new one could fail earlier or whatever but, statistically, they’re more or less equally likely to happen to the drives so I didn’t really bother with factoring in those scenarios.
And, frankly, at 8 years it’s time to yank the drives and replace them anyways because you’re so far down the bathtub curve it’s more like a slip n’ slide of death at that point.
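Worked out, the comparison above is just the price amortized over the expected remaining life: the new drive over the full 8 years, the used one (assumed 3 years old) over the 5 years it has left.

```python
# Cost-per-year comparison: new drive amortized over the full 8-year
# lifespan vs. a used one assumed to be 3 years old (5 years left).
LIFESPAN_YEARS = 8

new_price, used_price, used_age = 300, 180, 3

new_per_year = new_price / LIFESPAN_YEARS                 # 37.5
used_per_year = used_price / (LIFESPAN_YEARS - used_age)  # 36.0
```

A $1.50/drive/year difference, which is why the warranty and unknown history tip the math toward new.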
It’s usable-ish, but still kinda crashy and prone to occasionally imploding.
I wouldn’t really use it as my sole daily driver, but for certain people doing certain things, it’s probably fine.
(It needs another year, honestly.)
$30 to not have to deal with Windows 11 for another year feels like the deal of the century.
I love how they’re like ‘but you won’t get new features!’. They still may not have figured out that nobody cares about ‘new features’ being stuffed into the OS, but I guess you can’t have everything.
The company reported nearly 100 million monthly users, an increase of 47% from the year prior
My bullshit-o-meter is pegged at 11.
I’ve been on reddit lately and I’ll eat my shoe if that 47% increase was in any way actual real humans, because there’s no damn way.
I’m going to get downvoted to hell for this but uh, I usually tell clients Squarespace is what they want these days.
Self-hosting something like Wordpress or Ghost or Drupal or Joomla or whatever CMS you care to name costs time: you have to patch it, back it up, and do a lot of babysitting to keep it up and secure and running. It’s very much not a ship-and-forget - really, nothing selfhosting is.
I’m very firmly of the opinion that small business people should be focused on their business, not their email or website or whatever, because any time you spend fighting your tech stack is time you could have been actually making money. It’s all a cost, it just depends if you value $20 a month or your time more.
If I had someone come to me asking to set up this stuff for their business, I’d absolutely tell them to use gSuite for email, file sharing, documents, and such and Squarespace for the website and then not worry about shit, because they’re both reliable and do what they say on the tin.
too high TDP, using above the MAX rate of 250 Watt
Agreed. Intel’s design philosophy seems to be ‘space heater that does math’ for some reason. That’s been true since at least 10th gen, if not before then. I don’t know if it’s just chasing benchmark wins at any cost, or if they’re firmly of the opinion that hot and loud is fine as long as it’s fast and no customers will care - which I don’t really think is true anymore - or what, but they’ve certainly invested heavily in CPUs that push the literal limits of physics while trying to cool them.
Intel always had the advantage of superior production
That really stopped being true in the Skylake era when TSMC leapfrogged them and Intel was doing their 14nm++++++++ dance. I mean they did a shockingly good job of keeping that node relevant and competitive, but they were really only relevant and competitive on it until AMD caught up and exceeded their IPC with Ryzen 3000.
about the same price
Yeah, if gaming is your use case there’s exactly zero Intel products you should even be considering. There’s nothing that’s remotely competitive with a 7800x3d, and hell, for most people and games, even a 5800x3d is overkill.
And of course, those are both last-gen parts, so that’s about to get even worse with the 9800x3d.
For productivity, I guess if you’re mandated to use Intel or Intel cpus are the only validated ones it’s a choice. But ‘at the same price’ is the problem: there’s no case where I’d want to buy Intel over AMD if they cost the same and perform similarly, if for no other reason than I won’t need something stupid like a 360mm AIO to cool the damn thing.
Guess he shouldn’t have told everyone who didn’t like how he was acting to leave, huh?
Lenovo, outside of their really cheap consumer options - like, the $500-and-under stuff - is pretty solid.
But yeah, build quality is one reason why I roll my eyes at the ‘haha stupid buying apple! apple tax! lol ripped off!’ crowd: I mean maybe, but as soon as you pick up a MacBook whatever, it’s immediately obvious that you’re getting something for what you’re paying, and not some bendy flexy piece of plastic crap that will maybe physically survive the warranty period, but not much more.
build quality on most $1000 laptops
You’re not kidding.
I have a couple of laptops from various vendors, and they’re all built like shit.
ASUS is especially eyerolly: the case is literally crumbling into pieces. Like seriously? You couldn’t have picked a material that’s not literally going to disintegrate in two years on a $1200 laptop?
They state the code will be released after the first orders ship, which makes a certain kind of sense given that this is suddenly a competitive space.
Though, I 10000% agree that there’s no reason to take a leap of faith when you can just wait like, uh, a month, and see what they do after release. It’s not like they won’t still be selling these or something.
That’s because server offerings are real money, which is why Intel isn’t fucking those up.
AMD is in the same boat: they make pennies on client and gaming (including gpu), but dumptrucks of cash from selling Epycs.
IMO, the Zen 5(%) and Arrow Lake bad-for-gaming results are because uarch development from Intel and AMD are entirely focused on the customers that pay them: datacenter and enterprise.
Both of those CPU families clearly show that efficiency and a focus on extremely threaded workloads were the priorities, and what do you know, that’s enterprise workloads!
I think it’s less the era of x86 is ended and more the era of the x86 duopoly putting consumer/gaming workloads first has ended because, well, there’s just no money there relative to other things they could invest their time and design resources in.
I also expect this to happen with GPUs: AMD has already given up, and Intel is absolutely going to do the same as soon as they possibly can without it being a catastrophic self-inflicted wound (since they still want an iGPU to use). nVidia has also clearly stopped giving a shit about gaming - gamers get a GPU a year or two after enterprise has cards based on the same chip, and now they charge $2000* for them - and the gaming cards are often crippled in firmware/software so they won’t compete with the enterprise parts, on top of the driver license not allowing that kind of use anyway.
ARM is probably the consumer future, but we’ll see who and with what: I desperately hope that nVidia and MediaTek end up competitive so we don’t end up in a Qualcomm oops-your-cpu-is-two-years-old-no-more-support-for-you hellscape, but well, nVidia has made ARM SOCs for like, decades, and at no point would I call any of the ones they’ve ever shipped high performance desktop replacements.