
669 points danso | 3 comments
_bxg1 No.23260967
This is the latest in a string of incidents where critical software systems, facing new pressure due to the pandemic, are catastrophically failing their users. I think what's happened in the past is that most public-facing software systems either a) were not really critical (because people had the alternative of doing things in person), or b) (as in the case of all the ancient COBOL systems underpinning the US gov) had been made reliable over the years through sheer brute force as opposed to principled engineering. But in the latter case, as we saw with New Jersey's unemployment system, that "reliability" was fragile and contingent on the current state of affairs, and had no hope of withstanding a sudden shift in usage patterns.

Now we have various organizations - governmental and otherwise - hastily setting up online versions of essential services and it seems like every single one of them breaks on arrival.

We need some sort of standard for software engineering quality. I don't think this is an academic question anymore. Real people's lives are being impacted every day now by shoddy software, and with the current crisis they often have no alternative. Software that you or I could probably have executed better, but that the people who were hired to do it either a) couldn't, or b) didn't bother. It's nearly impossible for non-technical decision makers in these orgs to evaluate the quality of the systems they've hired people to build. We need quality assurance at an institutional level.

If not governmental, maybe an organization around this could be made by developers themselves. Not the "certified for $technology" certifications we have now, but a certification of fundamental software engineering skills and principles. A certification you can lose if you do something colossally irresponsible. At the end of the day, this dilution of quality is having a negative impact on our job field, so it concerns all of us. It leads to technical debt, micro-management, excessively rigid deadlines and requirements, which we all have to deal with. All of these are either symptoms of or coping mechanisms for management's inability to evaluate engineering quality.

1. AlchemistCamp No.23261239
I think you're correct in your assessment that top-down bureaucracies really struggle with software, but I don't think the solution is to inject a top-down bureaucratic gatekeeper into the path of every software career.
2. _bxg1 No.23261283
I'm only talking about creating a certification, not enforcing which orgs do and don't use it. A lot of software isn't important enough for such a thing, but a lot of it is. The point is that even when decision-makers do want software to be highly reliable, they have nothing but very blunt instruments for attempting to enforce that, because they're working in the dark.
3. ativzzz No.23263677
The same technologically incompetent leaders who manage failed software projects are going to be the ones to write these standards/certifications.

The real problem is a lack of technologically competent leadership. Many of the skills required to excel at technology don't overlap with the skills required to be a good leader, and both technology and leadership are difficult skills to train and develop individually. And lastly, the few people who are competent technological leaders would rather work for big tech, where they'll get paid far more and won't have to fight with technologically incompetent leadership to set up good standards.