
628 points kiyanwang | 17 comments
dockerd ◴[] No.43630691[source]
For those unable to open the link due to owner site being hit by Cloudflare limit, here's a link to web archive - https://web.archive.org/web/20250409082704/https://endler.de...
replies(2): >>43631005 #>>43631455 #
1. lapcat ◴[] No.43631455[source]
There's some irony, is there not, in presuming to be able to identify "the best programmers" when you've created a programming blog that completely falls down when it gets significant web traffic?
replies(8): >>43631537 #>>43631682 #>>43632057 #>>43632106 #>>43632144 #>>43632630 #>>43633134 #>>43636895 #
2. semiquaver ◴[] No.43631537[source]
Presumably the author didn’t claim that they were one of them :)
3. gorjusborg ◴[] No.43631682[source]
I think it is a fairly common trait of bad programmers to design a system based on completely unrealistic operating conditions (like multiple orders of magnitude of extra traffic).

Now that they've gotten the hug of death they'll probably plan for it next time.

replies(1): >>43633138 #
4. nocman ◴[] No.43632057[source]
> presuming to be able to identify "the best programmers"

He was identifying the best programmers he knows (as is obvious from the title). I don't think it is unreasonable at all for even a semi-technical person to be able to do that.

Also, it is highly likely that the author never expected their article to receive a high volume of web traffic, and allocated resources to it with that assumption. That doesn't say a thing about their technical abilities. You could be the best programmer in the world and make an incorrect assumption like that.

5. ergonaught ◴[] No.43632106[source]
> There's some irony, is there not

There is not.

6. udev4096 ◴[] No.43632144[source]
Most developers are terrible at system administration, which is quite disappointing and is one of the reasons the author uses Clownflare. Being able to maintain systems is as important as writing code
replies(1): >>43632968 #
7. reverendsteveii ◴[] No.43632630[source]
I can identify a lion without being able to chase down and kill a gazelle on the hoof
replies(1): >>43632737 #
8. lapcat ◴[] No.43632737[source]
This is not a good analogy. Anyone can identify "a programmer". Identifying "the best programmers", or "the best lions" (in some respect) is an entirely different matter.
replies(1): >>43634542 #
9. ericrallen ◴[] No.43632968[source]
This is kind of a ridiculous take.

Not going to speak for the author, but some of us just want to be able to write a blog post and publish it in our free time. We're not trying to "maintain systems" for fun.

Some of those posts get zero views, and some of them end up on the front page of Hacker News.

replies(2): >>43633111 #>>43633213 #
10. lapcat ◴[] No.43633111{3}[source]
There is literally a "Submit to HN" button at the bottom of the blog post.

Moreover, the author appears to be a lot more serious than just a free time blogger:

https://web.archive.org/web/20250405193600/https://endler.de...

> My interests are scalability, performance, and distributed systems

> Here is a list of my public speaking engagements.

> Some links on this blog are affiliate links and I earn a small commission if you end up buying something on the partner site

> Maintaining this blog and my projects is a lot of work and I'd love to spend a bigger part of my life writing and maintaining open source projects. If you like to support me in this goal, the best way would be to become a sponsor

11. kwertyoowiyop ◴[] No.43633134[source]
The best programmers know that using the free resource of the Internet Archive is the optimal approach for their own effort and cost, versus making their own website scale for a temporary load? (Kidding…I think)
12. grayhatter ◴[] No.43633138[source]
How many ways are there to build a site that doesn't have these defects and risks?

Good engineers build things that eliminate failure modes, rather than just planning for "reasonable traffic". Short of a DDoS, a simple blog shouldn't be able to die from hitting a rate limit. But given the site is down, I can't tell; maybe it's not just a blog.

replies(1): >>43635611 #
13. udev4096 ◴[] No.43633213{3}[source]
I was talking about more than just a blog. It puts things into a different perspective when you are writing a big program. For instance, say you are tasked with creating a custom auth system. Would you feel more comfortable having used something like authentik or kanidm in the past, or having no experience with it at all?
14. reverendsteveii ◴[] No.43634542{3}[source]
Make it "I can ID a good baker without being able to make a wild-fermented bread myself," then. In any case, it's a proof-of-the-pudding-is-in-the-eating thing: good programmers are defined as programmers who make good software, and good software is software that pleases users and provides the functionality they want. You don't need to be a programmer to know whether the software you're using is consistently good across its lifecycle. If it's bad at the outset, it's bad at the outset; if it's not built maintainably and extensibly, it will become bad over the course of its lifetime.
15. gorjusborg ◴[] No.43635611{3}[source]
> Good engineers build things that eliminate failure modes,

Yes, but not all failure modes, only the ones in scope for the goals of the system. From the outside you can't tell what the goals are.

There is no such thing as eliminating all failure modes, which was exactly the point I was making in my post above. The best you can do is define your goal clearly and design a system to meet the constraints defined by that goal. If goals change, you must redesign.

This is the core of engineering.

replies(1): >>43636500 #
16. grayhatter ◴[] No.43636500{4}[source]
> Yes, but not all failure modes, only the ones in scope for the goals of the system. From the outside you can't tell what the goals are.

Is basic availability not a goal of a blog?

Phrased differently: given two systems, one that fails when a theoretically possible but otherwise "unpredictable" number of requests arrives, and one without that failure mode, which is better?

> From the outside you can't tell what the goals are.

I either don't agree, not even a tiny bit, or I don't understand. Can you explain this differently?

> This is the core of engineering.

I'd say the core of engineering is making something that works. If you didn't anticipate something that most engineers would say is predictable, and that predictable thing, instead of degrading service, completely takes the whole thing down, such that it doesn't work... that's a problem, no?

17. mre ◴[] No.43636895[source]
Author here. The site was down because I'm on Cloudflare's free plan, which gives me 100k requests/day. I couldn't care less if the site was up for HN, honestly, because traffic costs me money and caches work fine. FWIW, the site was on Github Pages before and it handles previous frontpage traffic fine. So I guess if there were any irony in it, it would be about changing a system that worked perfectly well before. My goal was to play with workers a bit and add some server-side features, which, of course, never materialized. I might migrate back to GH because that's where my other blog, corrode.dev, is and I don't need more than that.