High functioning autism exists, but autism in general doesn't seem to give any advantage to general intelligence. And the low end of functioning in autism is really, really low.
It's a rare breed (1-2% of the population) that will actively push back, insist on facts, and stick to only the "hard, unyielding reality" of physics, chemistry, mathematics, logic, etc...
There is a strong correlation between this personality type and autism.
To prioritise reality above the personal whims of others, you have to be largely indifferent to how other people "feel" or what their conflicting priorities might be.
To be truly intelligent, you have to be able to call the emperor naked.
PS: It's easy to disagree with the above, but that is invariably an instance of "the fish is the last to know it lives in water". Something like 80% of the adult population goes along with Santa for Grownups because of peer pressure, also known as "mainstream religions". Don't get me started on partisan voting against one's own interests. Etc...
Zero push-back? Or zero push-back in front of the rest of the group?
Humans are pack animals, highly evolved for social connection, and ostracism can be life threatening. The benefits of group membership and cohesion are worth tolerating some mistakes and suboptimal outcomes: over time, the expected utility for individuals and in the aggregate is much higher when people work together harmoniously as a group.
The problem is that we have one set of wiring, one set of instincts, and one set of common social behaviours. These just don’t work in “unnatural” scenarios for which we aren’t evolved, such as pure mathematics or computer science.
The maths just doesn’t care about your seniority and a proof is a proof irrespective of the age of the author.
To truly excel in those “hard sciences” the default wiring isn’t optimal.
The article states that non-default wiring has the downside of also causing autism.
Cause (1) cannot usually be resolved without some sort of technological innovation.
Cause (2) is quite interesting because it is a social problem.
For example, someone comes to you with a Markov decision process and insists that no form of reinforcement learning could be a viable solution. Why would they do this? Probably because their understanding of RL differs from yours. Or your understanding of the problem differs from theirs. This can be solved by communication.
Stated differently, the topology of your “semantic map” of the domain differs from theirs. To resolve it you must be able to obtain an accurate mapping of their local topology around the point of disagreement onto yours.
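To make the RL example concrete: this is exactly the kind of disagreement that reality settles regardless of who holds it. Below is a minimal sketch of tabular Q-learning on a made-up four-state chain MDP (the states, rewards, and hyperparameters are all illustrative, not from any real problem) showing that a basic RL method does recover the optimal policy for a finite MDP.

```python
import random

# A toy deterministic MDP (made up for this sketch):
# states 0..3 on a chain; state 3 is terminal and pays reward 1.
# Action 0 moves left, action 1 moves right; all other transitions pay 0.
N_STATES, TERMINAL = 4, 3

def step(state, action):
    next_state = min(state + 1, TERMINAL) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == TERMINAL else 0.0
    return next_state, reward, next_state == TERMINAL

def q_learning(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    # Tabular Q-learning with epsilon-greedy exploration; converges to the
    # optimal action values on finite MDPs under mild conditions.
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < epsilon:
                a = rng.randrange(2)          # explore
            else:
                a = 0 if q[s][0] >= q[s][1] else 1  # exploit
            s2, r, done = step(s, a)
            target = r if done else r + gamma * max(q[s2])
            q[s][a] += alpha * (target - q[s][a])
            s = s2
    return q

q = q_learning()
policy = [0 if q[s][0] >= q[s][1] else 1 for s in range(TERMINAL)]
print(policy)  # greedy action per non-terminal state: 1 = move right
```

The learned greedy policy is "always move right", the optimal one, and the action values settle near gamma raised to the distance from the goal. The point is not the code itself but that "does RL solve this?" is a question with a checkable answer, unlike most social disputes.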