Reducing cognitive load doesn't happen in a vacuum where simple language constructs always trump abstractions or clever language constructs. Writing code, documents, and comments, and choosing the right design, all depend on who you think is going to interact with those artifacts, and on understanding their likely state of mind when they do, i.e. theory of mind.
What counts as high cognitive load also varies a lot: a mixed junior/senior/principal engineering team with high churn experiences it very differently from a homogeneous team that has worked in the same codebase together for 10+ years.
I'd argue the examples from the article are not high-cognitive-load abstractions, but the wrong abstractions, ones that resulted in high cognitive load because they didn't make things simpler to reason about. There's a reason all modern standard libraries ship with standard list/array/set/hashmap/string/date constructs: so we don't have to manually reimplement them. They also give a team using the language (a framework in its own way) a common vocabulary of nouns and verbs for those constructs. In essence, they reduce cognitive load once the initial learning phase of the language is done.
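To make that concrete, here's a minimal sketch in Python. The hand-rolled PairStore class is hypothetical (not from the article), standing in for any team-specific reimplementation of a standard construct; every new reader has to learn its quirks, whereas the built-in dict costs nothing after you've learned the language:

    # Hypothetical hand-rolled "map" a team might have built themselves.
    # Correct, but every reader must learn put/fetch and their semantics.
    class PairStore:
        def __init__(self):
            self._pairs = []

        def put(self, key, value):
            # Linear scan: replace the value if the key already exists.
            for i, (k, _) in enumerate(self._pairs):
                if k == key:
                    self._pairs[i] = (key, value)
                    return
            self._pairs.append((key, value))

        def fetch(self, key, fallback=None):
            for k, v in self._pairs:
                if k == key:
                    return v
            return fallback

    # The standard construct: same behavior, zero onboarding cost,
    # because "dict" and "get" are shared vocabulary across the team.
    store = {}
    store["user:42"] = "alice"
    print(store.get("user:42"))          # alice
    print(store.get("user:99", "n/a"))   # n/a

Neither version is hard to read in isolation; the difference is that only one of them adds a concept the whole team has to carry around.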
Reading through the examples in the article, what likely went wrong is that the decision to add an abstraction/layer/framework wasn't driven by observation of emergent behavior, but by "it sounds cool", aka cargo-cult programming or resume-driven programming.
If you notice a group of people fumbling over the same things over and over again, introduce a new concept (abstraction/framework/function), and then notice that it doesn't improve things, or even makes them harder to understand, after the initial learning period, then stop doing it! I know, the sunk-cost fallacy makes that difficult after you've spent 3 months convincing your PM/EM/CTO that a new framework might help, but then you have bigger problems than high cognitive load or wrong abstractions ;)