I’ll give an example. At my previous company there was a program where you basically select a start date, select an end date, select the system, and press a button, and it reaches out to a database and pulls all the data matching those parameters. The horrors of this were: 1. The queries were hard-coded.
-
They were stored in a configuration file, in XML format.
-
The queries were not one entry apiece. Each was four: a start, the part between the start date and the end date, the part between the end date and the system, and then the end part. All of these were then concatenated in the program, intermixed with variables (see the sketch after this list).
-
This was then sent to the server as pure sql, no orm.
-
Here’s my favorite part. You obviously don’t want anyone modifying the configuration file, so they encrypted it. Now I know what you’re thinking: at some point you’ll probably need to modify or add to the configuration, so you store an unencrypted version in a secure location. Nope! The program had the ability to encrypt and decrypt, but there were no visible buttons to access those functions. The program was written in WinForms. You had to open the program in Visual Studio, manually expand the size of the window (locked size in regular use), and that shows the buttons. Now run the program in debug. Press the decrypt button. DO NOT EXIT THE PROGRAM! Edit the file in a text editor. Save the file. Press the encrypt button. Copy the encrypted file to any other location on your computer. Close the program. Manually email the encrypted file to anybody using the file.
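For the curious, here's roughly what the fragment concatenation looked like. This is a minimal sketch in Java (the original was WinForms, so presumably C#); all fragment contents and names are invented by me:

public class ReportQueryBuilder {
    // Stand-ins for the four query fragments that lived in the encrypted XML config.
    static final String FRAG_START         = "SELECT * FROM readings WHERE ts >= '";
    static final String FRAG_START_TO_END  = "' AND ts <= '";
    static final String FRAG_END_TO_SYSTEM = "' AND system = '";
    static final String FRAG_END           = "'";

    // Concatenation intermixed with variables, exactly as described: no
    // parameterization, so any quote in the inputs breaks (or injects into)
    // the query before it gets shipped to the server as raw SQL.
    static String build(String startDate, String endDate, String system) {
        return FRAG_START + startDate
             + FRAG_START_TO_END + endDate
             + FRAG_END_TO_SYSTEM + system
             + FRAG_END;
    }
}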
XML-DOM page templates stored in a database, line by line.
So rendering a page started with:
select * from pages
where page_id = 'index'
order by line_number asc;
Each line of XML from each record was appended into a single string. This string was then XSLT transformed to HTML, for every page load.
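Roughly, the render path would have been something like this sketch. I'm assuming a line_text column (the original did a select *) and using the standard javax.xml.transform XSLT API; all names are hypothetical:

import java.io.StringReader;
import java.io.StringWriter;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.xml.transform.Transformer;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class PageRenderer {
    // Reassemble the page's XML from its per-line rows, then XSLT it to
    // HTML, on every single page load.
    static String render(Connection db, Transformer xslt, String pageId) throws Exception {
        StringBuilder xml = new StringBuilder();
        try (PreparedStatement ps = db.prepareStatement(
                "SELECT line_text FROM pages WHERE page_id = ? ORDER BY line_number ASC")) {
            ps.setString(1, pageId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) xml.append(rs.getString(1)).append('\n');
            }
        }
        StringWriter html = new StringWriter();
        xslt.transform(new StreamSource(new StringReader(xml.toString())),
                       new StreamResult(html));
        return html.toString();
    }
}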
This has to be one of the worst ways to reinvent a filesystem that I’ve ever heard of. At the very least, storing static data in a relational database at this scale should be a slappable offense.
For anyone who knows and understands Android development, process death, and saved state…
The previous dev had no understanding of any of it, and had null checks that returned early or bypassed important logic littered all over the app, everywhere.
I could only assume he didn’t understand how all these things were randomly null or why it was crashing all the time, so he thought, oh, I’ll just put a check in.
Well, you minimize that app for a little bit, reopen it, and every screen was fucked visually and unusable, or would outright crash. It was everywhere. This was before Google introduced things like ViewModels, which helped, but even then were for a while not a full solution to the problem.
It was many, many months of just resolving these problems and rewriting it the correct way so it wouldn’t have them.
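For the non-Android folks, the anti-pattern looks roughly like this sketch (all names are mine; after process death, fields that were only ever set in memory come back null):

import android.app.Activity;
import android.os.Bundle;

public class CheckoutActivity extends Activity {
    private String cartId; // set when navigating from the previous screen; gone after process death

    void onPayClicked() {
        // The "fix" described above: if state evaporated, silently skip the
        // important logic instead of restoring it.
        if (cartId == null) return;
        // ... proceed with payment using cartId ...
    }

    // The actual fix: save and restore across process death.
    @Override
    protected void onSaveInstanceState(Bundle outState) {
        super.onSaveInstanceState(outState);
        outState.putString("cartId", cartId);
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        if (savedInstanceState != null) {
            cartId = savedInstanceState.getString("cartId");
        }
    }
}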
Sounds like a motel horror story.
I don’t have any specific examples, but the standard of code is really bad in science. I don’t mean this in an overly judgemental way: I am not surprised that scientists with minimal code-specific education end up with the kind of “eh, close enough” stuff that you see in personal projects. It is unfortunate how it leads to code being even less intelligible on average, which makes collaboration harder, even if the code is released open source.
I see a lot of teams basically reinventing the wheel. For example, 3D protein structures in the Protein Data Bank (PDB) don’t have hydrogens on them. This is partly because hydrogen placement depends a heckton on the pH of the environment the protein is in. Aspartic acid, for example, is an amino acid whose variable side chain (different for each amino acid) is CH2COOH in acidic conditions but CH2COO- in basic conditions. Because it depends on both the protein and the protein’s environment, you tend to get research groups just bashing together some simple code to add hydrogens back on, depending on what they’re studying. This can lead to silly mistakes and shabby code in general, though.
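The core decision those hand-rolled scripts make is basically a pKa cutoff. A minimal sketch (the ~3.7 pKa is the textbook value for aspartate's side chain; everything else here is my own simplification):

public class Protonation {
    // Textbook pKa of aspartate's side-chain carboxyl group (about 3.7).
    static final double ASP_SIDE_CHAIN_PKA = 3.7;

    // Crude majority-species rule: protonated (CH2COOH) below the pKa,
    // deprotonated (CH2COO-) above it.
    static boolean sideChainProtonated(double pH, double pKa) {
        return pH < pKa;
    }

    public static void main(String[] args) {
        System.out.println(sideChainProtonated(2.0, ASP_SIDE_CHAIN_PKA)); // true: acidic conditions
        System.out.println(sideChainProtonated(7.4, ASP_SIDE_CHAIN_PKA)); // false: physiological pH
    }
}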
I can’t be too mad about it though. After all, wanting to learn how to be better at this stuff and to understand what was best practice caused me to go out and learn this stuff properly (or attempt to). Amongst programmers, I’m still more biochemist than programmer, but amongst my fellow scientists, I’m more programmer than biochemist. It’s a weird, liminal existence, but I sort of dig it.
I’ve seen a many-to-many relationship written as a column of CSV ids.
Also, the same ppl used “ID” instead of “id” (sometimes even “Id”), which made the ORMs cry hard
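For reference, the conventional fix for the CSV-ids thing is a junction table, which ORMs support natively. A minimal sketch with JPA annotations (entity and column names are made up):

import java.util.Set;
import javax.persistence.*;

@Entity
public class Student {
    @Id @GeneratedValue
    private Long id; // lowercase "id", so the ORM's conventions actually work

    // A real many-to-many: a student_course junction table instead of a
    // "course_ids" VARCHAR column full of comma-separated ids.
    @ManyToMany
    @JoinTable(name = "student_course",
               joinColumns = @JoinColumn(name = "student_id"),
               inverseJoinColumns = @JoinColumn(name = "course_id"))
    private Set<Course> courses;
}

@Entity
class Course {
    @Id @GeneratedValue
    private Long id;
}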
Half our ids are called ‘number’, sooo. Also, our entire in-database translation system relies on GUIDs that are not foreign keys. The only reason our ORM doesn’t flip out on that is because it’s completely custom-made, with semi-autogenerated stored procedures resolving that translation in-database (using yet another SP).
We are at 2,696 stored procedures right now; most of those are simple CRUD (we can’t do straight selects on our tables because of the translations, so every select with different parameters is its own SP).
I think the worst software-gore I remember seeing was a web app that dumped all the data to the browser as a huge XML file and then had JavaScript translate the contents of the XML into views. That probably wouldn’t even sound that far out there now if it were JSON, thanks to the sleepless efforts of the JavaScript industrial complex, but back then you’d just render pages and return them.
The encryption thing is definitely weird/crazy and storing the SQL in XML is kinda janky, but sending SQL to a DB server is literally how all SQL implementations work (well, except for sqlite, heh).
ORMs are straight trash and shouldn’t be used. Developers should write SQL or something equivalent and learn how to properly use databases. eDSLs in a programming language are fine as long as you still have complete control over the queries and all queries are expressible. ORMs are how you get shit performance and developers who don’t have the first clue how databases work (because of leaky/bad abstractions trying to pretend databases don’t require a fundamentally different way of thinking from application programming).
ORMs are a way to seamlessly handle the model layer of a codebase. But I agree.
On my first big project (Symfony, with Doctrine ORM), we had to write several SQL queries by hand due to the complexity of the databases here and there. So we were kept on our toes when it came to database knowledge, haha
Oh boy, this one was a doozy…
Was working at a very big company named after a rainforest on smart home products with integrations for a certain home assistant…
New feature was being built that integrates the aforementioned home assistant with customer’s printers so they can ask the assistant to print stuff for them.
The initial design lands from our partner team with a Java backend service fairly nicely integrated with some CUPS libraries for generating the final document to be sent to the customer’s printer. All good.
They are about to launch when… uh oh… the legal team notices an AGPL licensed package in one of the CUPS library’s dependencies that was absolutely required for the document format needed by the project and the launch is cancelled.
So the team goes off in a panic looking for alternatives to this library and can’t find any replacements. After a month or two they come back with their solution…
Instead of converting the document directly in the backend service with the linked CUPS library (AGPL being a “forbidden license” at this company), the new flow was:
1. The backend uploads the initial document to an S3 bucket.
2. It builds a CUPS document-conversion bash script using some random Java library.
3. The script is sent (raw) to a blank AWS host that comes pre-packaged with the CUPS binaries. These hosts were not automated with CI/CD or auto-updates, as company practice usually mandated, because updating them might remove the CUPS binaries, so they required a ton of manual maintenance over the service’s lifetime…
4. The script is executed on that “clean” host: it downloads the document from S3, converts it via the CUPS command-line binary, and re-uploads it to another S3 bucket.
5. The Java backend picks the converted document up from there and continues working it through the whole backend pipeline of various services until it gets to the customer’s printer.
This seemed to satisfy the legal team at the very least, and I have no doubt is probably still in production today…
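A condensed sketch of steps 1 and 2 of that detour (bucket names, paths, and the exact cupsfilter invocation are my guesses; the AWS SDK v2 calls are real):

import java.nio.file.Path;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class PrintJobDetour {
    // Upload the source document, then generate the bash script that the
    // hand-maintained CUPS host will run: download, convert, re-upload.
    static String stageJob(S3Client s3, Path doc, String jobId) {
        s3.putObject(PutObjectRequest.builder()
                        .bucket("print-jobs-in").key(jobId).build(),
                RequestBody.fromFile(doc));
        return String.join("\n",
                "#!/bin/bash",
                "set -e",
                "aws s3 cp s3://print-jobs-in/" + jobId + " /tmp/in.doc",
                "cupsfilter -m application/pdf /tmp/in.doc > /tmp/out.pdf",
                "aws s3 cp /tmp/out.pdf s3://print-jobs-out/" + jobId + ".pdf");
    }
    // The returned script text was then shipped raw to the CUPS host and
    // executed there; the backend polled the output bucket for the result.
}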
The kicker though? After all those months of dev work from a whole team (likely all on 6 figure salaries), and all the time spent by various engineers including myself on maintenance and upkeep on that solution after it was transferred to us?
An alternative, completely unrestricted corporate license was available for the package in question for about $100 per year so long as you negotiated it with the maintainers.
But that was a completely unacceptable and avoidable cost according to upper management…
Wait, $100 per year total, or $100 per seat per year? If it’s per seat I can understand; if it’s total, wtf…
$100 total, per year… as a FOSS enthusiast, it made me very angry that such a rich company was so petty over such a small cost for a product that raked in multiple millions of dollars per year 😾
Yeah, that’s fucked up. From two perspectives: 1. Whoever wrote that library needs money to survive. 2. From the company’s point of view, they wasted WAY more money on the development than the license. Hell, if one developer spent a day on it, they paid more than they would have for the license.
The first time something goes wrong with that complicated setup, it probably pays for half a century or more of its fee.
I got forcefully moved onto another team at work. They use Observables to replace signals, change detection, local storage, and even function calls. Every single component is a tangled mess of Observables and RxJS. Our hotlist has over 300 bugs, and the app is like 6 months old.
I’ve been looking for a new team
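To make that concrete, here's the shape of the anti-pattern sketched in RxJava (the team's code was the RxJS equivalent; all names are mine): an Observable standing in for a plain function call.

import io.reactivex.subjects.PublishSubject;

public class ObservableEverything {
    // The anti-pattern: a Subject replacing a direct method call.
    private final PublishSubject<String> saveRequests = PublishSubject.create();

    ObservableEverything() {
        // Wiring buried in a constructor: some subscriber, somewhere,
        // eventually does the work.
        saveRequests.subscribe(this::save);
    }

    void onSaveClicked(String data) {
        saveRequests.onNext(data); // versus just calling save(data) directly
    }

    private void save(String data) {
        System.out.println("saving " + data);
    }
}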
I had some absolutely beautiful RxJava2 chains in an app I worked on once. Can definitely be abused and done poorly though.
There’s a part of me that kind of feels like this could work if you just do it right. Like the idea is kind of cool, in a way.
Unfortunately, it results in a dependency tree that resembles the tangled power lines in Bangladesh, especially when half the code base is written by new devs using generative AI and there isn’t a design doc in sight.
There was a website where users could request something or other, like a PDF report. Users had a limited number of tokens per month.
The client would make a call to the backend and say how many tokens it was spending. The backend would then update their total, make the PDF, and send it.
Except this is stupid. First of all, if you told it you were spending -1 tokens, it would happily accept this and give you a free token along with your report.
Second of all, why is the client sending that at all? The client should just ask and the backend should figure out if they have enough credit or not.
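A sketch of what that server-side handler must have looked like (names hypothetical): the balance check passes for any negative number, so the subtraction becomes a deposit.

public class TokenService {
    int balance = 10;

    // Buggy endpoint: the client tells the server how many tokens the job
    // costs, and the server believes it.
    byte[] generateReport(int tokensClaimedByClient) {
        // 10 >= -1 is true, so a claimed cost of -1 passes the check and
        // the subtraction quietly *adds* a token to the balance.
        if (balance >= tokensClaimedByClient) {
            balance -= tokensClaimedByClient;
            return renderPdf();
        }
        throw new IllegalStateException("not enough tokens");
    }

    byte[] renderPdf() { return new byte[0]; } // stand-in
}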
I agree, but I would say if there are variable token costs depending on the report, it would be nice if the client sent the request to the server, the server calculated x tokens to be used and sent x back to the client, the client confirmed that was acceptable, and then the server did the work.
Like, if I expected a report to be 2 tokens but because of some quirk or a typo or something it cost 200 tokens, I would like a chance to cancel it if it’s not worth it.
Back in the day, a C program to handle estimating procurement costs for complex government contracts. We had to figure out the code and rewrite it in a different language. It was just one giant loop, no functions, with variables named V1, V2, V3, etc. Hundreds and hundreds of them. I still shudder at the horror of it all.
I worked on a laser seam welder which was basically programmed in a mix of G-code and, I guess, VB??
The fun part was that variables could only be numbers between 100 and 999. So let’s say you have a sensor and need to verify it’s within a certain range. You could set #525 to 10 and #526 to 20, then say #527 = sensor 1 signal. Now lower down you verify it as: if (#525 > #527 || #526 < #527) { show error }
Now, you could create each variable at the beginning with a comment saying what it was, but then you have to keep referring back to the top to remind yourself which number was which. Or create each variable at first use, so it’s closer to hand, but now the definitions are spread across the document.
I went with the first approach and just printed out the first two pages, which listed all the variables.
Before you ask: I talked to the guy who wrote the language and made the system many times; he confirmed you couldn’t use variable names.
G-code is basically a geometric scripting language and isn’t Turing complete in basic implementations. Every manufacturer pretty much also has their own dialect that is Turing complete.
G-code with control commands and variables is called, no shit, Macro G-code. It’s Turing complete. That style of variable naming is normal and is inherited from hardware registers/banks and TTL.
It’s not unusual for a save dialog to be labelled Punch as it has a direct lineage from punch tape.
Kind of like assembly and a graphing calculator had an abortion together.
I wonder at what point it would be easier to make a compiler to convert variable names into those numbers
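Probably not that hard, honestly. A toy Java sketch of the idea (everything here is hypothetical: names in {braces} get auto-assigned to #100–#999 registers before the code goes to the machine):

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MacroNamer {
    // Replace every {name} placeholder with an auto-assigned macro register
    // in the 100-999 range, reusing the same number for repeated names.
    static String assignRegisters(String source) {
        Map<String, Integer> registers = new LinkedHashMap<>();
        Matcher m = Pattern.compile("\\{(\\w+)\\}").matcher(source);
        StringBuffer out = new StringBuffer();
        int next = 100;
        while (m.find()) {
            String name = m.group(1);
            if (!registers.containsKey(name)) {
                if (next > 999) throw new IllegalStateException("out of registers");
                registers.put(name, next++);
            }
            m.appendReplacement(out, "#" + registers.get(name));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        // {low} -> #100, {high} -> #101, {sensor} -> #102
        System.out.println(assignRegisters(
                "{low} = 10\n{high} = 20\nIF [{sensor} LT {low}] GOTO 999"));
    }
}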
We had some super old code in our company monorepo that was written by someone who became the CTO. There was a comment forbidding people from writing private methods in the codebase because “we aren’t babies”. It explained so much about the awful code and why everything was crazy.
Access modifiers are definitely something I despise about OOP languages, though I understand that OOP’s nature makes them necessary.
That sounds like someone who didn’t understand the purpose of private
Yet he was still in charge of all the engineers who did. He had people actively working against their best interests lol. Disaster
Yeah, that just seems like a recipe for disaster.
This was then sent to the server as pure sql, no orm.
ORMs are overrated.
Yeah, but simply using Entity Framework would have made the configuration file just a list of systems.
Joined a new team and one of my first tasks was a refactor on a shared code file (Java) that was littered with data validations like
if ("".equals(id) || id == null) { throw new IllegalArgumentException() }The dev who wrote it clearly was trying to make sure the string values were populated but they apparently A) didn’t think to just put the null check first so they didnt have to write their string comparison so terribly or else didnt understand short circuiting and B) didn’t know any other null-safe way to check for an empty string, like, say StringUtils.isEmpty()
I mean… that’s bad, but not on the same scale as some of these other issues.
Sure. There were worse problems too: SQL injection vulnerabilities, dense functions with hundreds of lines of spaghetti code, absolutely zero test coverage on any project, etc. That’s just the easiest one to show an example of, and it’s also the one that made me flinch every time I saw it.
"".equals()😨If it makes you feel better at my last company I asked the “senior validation specialist” what the validation path would be for a program which incorporated unit tests.
The answer I got was “what’s a unit test?”
🥲