
123 points by eterm | 4 comments
eterm ◴[] No.43925356[source]
A post in which I try to rubber-duck a CoreWCF issue I've been having, because stackoverflow no longer seems suitable for asking questions about programming issues.

Screaming into the void of the blogosphere is catharsis for getting my SO question closed.

And because I know you're all nosy, the SO question is here: https://stackoverflow.com/questions/79605462/high-cpu-usage-... . Please feel free to point out more ways in which I screwed up asking my SO question.

replies(10): >>43925551 #>>43925669 #>>43925930 #>>43925975 #>>43926332 #>>43927351 #>>43931071 #>>43933405 #>>43933839 #>>43935803 #
matsemann ◴[] No.43925669[source]
Honestly, I agree with it not being a good fit for a Q&A site. It's a debugging problem, probably needing a discussion, and it might not even be of any use to others, given that "high cpu" is kinda vague. Seems better suited for a bug report / issue tracker of the relevant library.
replies(5): >>43925828 #>>43925881 #>>43925895 #>>43926071 #>>43932335 #
wokwokwok ◴[] No.43925895[source]
How can a question that is:

1) clearly technical

2) reproducible

3) accompanied by a clear failure condition

Not be a suitable candidate for S/O?

Did we step into a dimension where only "How do I print('hello world')?" is a valid question while I wasn't watching, because it has a trivial one-line answer?

A question being hard doesn't mean it's bad; it just means fewer people are competent to answer it. The same goes for obscure questions; there might just not be many people who care, but the question itself is entirely valid.

Does that mean they're not suitable for S/O?

I... can't believe anyone seriously believes that hard niche problems are too obscure or too hard for S/O to be bothered to grace itself with.

It's absurd.

It just baffles me that a question that might take some effort to answer could be 'not suitable' for S/O.

replies(2): >>43925997 #>>43926561 #
1. zahlman ◴[] No.43926561[source]
The problem with the question as originally asked is not the difficulty or "obscurity".

The problem is complexity and scope.

We don't debug code for others. We expect them to find the specific part of the code that is causing a problem and showcase a minimal reproducible example. For performance issues, we expect them to profile code and isolate bottlenecks - and then they can ask a hard, obscure question about the bottleneck. Or a very easy one, as long as it's something that could make sense to ask after putting in the effort.
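
(To make "profile and isolate" concrete: here's a deliberately made-up sketch, not anyone's real code, of the kind of narrowing we expect before a performance question gets asked. It only uses the standard-library `cProfile` and `pstats` modules; the functions are placeholder stand-ins.)

    # Hypothetical sketch: narrow a performance problem down before asking.
    # The functions below are placeholders, not code from any real question.
    import cProfile
    import io
    import pstats

    def parse_records(lines):
        # stand-in for the code path suspected of being slow
        return [line.split(",") for line in lines]

    def process(lines):
        records = parse_records(lines)
        return sum(len(r) for r in records)

    data = ["a,b,c"] * 100_000

    profiler = cProfile.Profile()
    profiler.enable()
    process(data)
    profiler.disable()

    # Print the few functions where most of the time goes; whatever tops
    # this list is what's worth turning into a minimal, self-contained question.
    stream = io.StringIO()
    pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
    print(stream.getvalue())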

In short: we're looking for a question, not a problem. Stack Overflow "can't be bothered to grace itself with" hard niche problems, or with easy common problems. But it is about answering the question that results from an analysis of a problem. Whether that's understanding the exact semantics of argument passing, or just wanting to know how to concatenate lists.

And we're looking for one question at a time. If there are multiple issues in a piece of code, they need to be isolated and asked about separately. If the task clearly breaks down into a series of steps in one obvious way, then you need to figure out which of those steps is actually causing a problem first, and ask about each problematic step separately. (Or better yet, find the existing Q&A.)

(Questions seeking to figure out an algorithm are usually okay, but usually better asked on e.g. cs.stackexchange.com. And usually, an algorithm worth asking about isn't just "do X, then do Y, then do Z".)

Stack Overflow is full of highly competent people who are yearning for questions that demand their specific expertise - recently, not just in the 2010s.

Most questions I've asked since 2020 were deliberate hooks to deal with common beginner-level issues or close FAQs that didn't already have a clear duplicate target. (I've stopped contributing new Q&A, but still occasionally help out with curation tasks like editing.) But I asked https://stackoverflow.com/questions/75677825 because I actually wanted an answer, and it's an instructive example here.

Answering it required detailed expert-level knowledge of modern CPU architectures and reverse engineering of the Python implementation. Asking it required noticing a performance issue, then putting extensive effort into simplifying the examples as much as possible and diagnosing the exact qualities of the input that degrade performance - as well as ruling out other simple explanations and citing the existing Q&A about those.

But demonstrating it requires nothing more than a few invocations of the `timeit` standard library module.
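
(The snippets below are not the ones from that question; they're a made-up illustration of what "a few invocations of `timeit`" looks like in practice: the same operation timed against two inputs that differ only in the quality being asked about.)

    # Made-up illustration of a timeit-based demonstration; the operation
    # and inputs are placeholders, not those from the linked question.
    import random
    import timeit

    sorted_data = list(range(10_000))
    shuffled_data = sorted_data[:]
    random.shuffle(shuffled_data)

    # Same call, two inputs; the question then becomes "why is one slower?"
    t_sorted = timeit.timeit(lambda: sorted(sorted_data), number=1_000)
    t_shuffled = timeit.timeit(lambda: sorted(shuffled_data), number=1_000)

    print(f"already-sorted input: {t_sorted:.3f}s, shuffled input: {t_shuffled:.3f}s")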

replies(2): >>43933438 #>>43933471 #
2. xeromal ◴[] No.43933438[source]
This mentality is probably why SO is dying a slow death
3. int_19h ◴[] No.43933471[source]
If that is the current culture on SO, that's very unfortunate. Back when I was active on it - in late 00s to mid 10s - I absolutely did "debug code for others" when the problem was interesting enough to warrant it.
replies(1): >>43936770 #
4. zahlman ◴[] No.43936770[source]
> Back when I was active on it - in late 00s to mid 10s - I absolutely did "debug code for others" when the problem was interesting enough to warrant it.

Yes, lots of people did, including myself.

When I got tired of it, and then came back years later and saw what had happened to the Q&A, I understood that it had been a mistake to do so, and eventually realized - through extensive research on the site meta, Stack Exchange meta, etc. - that it was the same mistake that the site had originally specifically sought to avoid.

And I saw that there had been years of arguing over the labels for close reasons - leading for example to the retirement of "not a real question", and the more or less direct replacement of "too broad" with "needs more focus", and the rather more approximate replacement of "too localized" with "not reproducible or caused by a typo" - because of a collective realization of the real purpose of closing questions, and of the value of being selective. Not just in terms of answer-writers getting frustrated - because we also discovered that some people just don't get frustrated, and are happy to spend amazing amounts of time trying to read the minds of people who can barely put together a coherent sentence and turn out to be asking about the same common issue for the N+1th time.

And I saw that there had been years of arguing over whether expecting a "minimum level of understanding" was the right phrasing (it's really about the effect that has on question-writing, not just on whether the OP is likely to be able to understand a correctly-written answer - although that does weigh in the calculation), which led to a trial close reason being implemented for a couple of weeks in 2013.

And I saw that there had been years of arguing over whether question difficulty (in either direction) is a disqualifying factor (it isn't, but we won't write a tutorial instead of answering a concrete question, nor will we complete multi-part coding work to order), or the reason why the OP wants an answer (generally not relevant, but see e.g. https://meta.stackoverflow.com/questions/334822 https://meta.stackoverflow.com/questions/284236 https://meta.stackoverflow.com/questions/326569 https://meta.stackoverflow.com/questions/329321).

And I saw that by the time I got back, quite a few things had been more or less settled and reasoned out, but that the general community was not on the same page as the people who had been actually thinking about these things. Obviously I didn't agree with everything immediately, and obviously there are still disagreements among those who are broadly speaking on the same page. But I could see the vision.

And I realized that before I left, I had been using the site without putting any effort into trying to understand it. Like most users, I had been "the general community".

(And then I saw that there was tons of unpleasantness between users and the company itself, and unfit-for-purpose site software, and a totally broken reputation system that had never been properly reconsidered. Which is how I ended up checking out Codidact as an alternative. But most of what I say about Stack Overflow isn't really about Stack Overflow; it's about "the Stack Exchange model" - the "Q&A site" as I understand it - which alternative sites still implement.)

We didn't get here spontaneously. Everything was extensively discussed and the discussion is extensively documented, with carefully considered rationale where possible.

> If that is the current culture on SO, that's very unfortunate.

Obviously I disagree.