It seems a lot harder to measure whether your docs help people make good decisions than to measure whether they help people successfully accomplish a task. I think we optimize for task-based/procedural docs because the business needs us to prove our value, there is a real need for this type of documentation, and there are lots of ways to measure and report on it over short timelines. But answering the question "Did this docset help someone build the right thing in the right way?"... I mean, organizations struggle to answer that question about their own products; abstracting it to measure the effectiveness of your docs seems super fuzzy.
Which is not to say you can't write docs that do this, just that it seems very hard to use numbers to prove that you have done so. I definitely think I could rank how well different docsets support users who need to make decisions, and I could offer up explanations to support my reasoning, but I don't know how to quantify that for the business.
I wonder how the structure of a docset designed to support decisions differs from that of a docset that supports tasks. I expect you'd have the same main categories (conceptual, reference, guides), but maybe with a lot more conceptual docs and more space dedicated to contextualizing the concepts. I would also expect topics to become more interdependent, with more cross-references, etc.