Bringing it back to bats: a failure to imagine what it's like to be a bat just indicates that the overlaps between human and bat modalities don't admit a coherent gluing that humans can inhabit phenomenally.
There's something more to it than this.
For one thing, there's a threshold of awareness. Your mind is constantly doing things and having thoughts that never reach the threshold of awareness. You can observe more of this stuff if you meditate and less of it if you constantly distract yourself. But consciousness, IMO, should have the idea of a threshold baked in.
For another, the brain will unify things that don't make sense. I take you to mean something like: consciousness is what happens when there are no obstructions to stitching sensory data together. But the brain does a lot of work interpreting incoherent data as best it can. It doesn't have to limit itself to coherent data.
> It doesn't have to limit itself to coherent data.
There are specific failure cases for non-integrability:
1. Dissociation/derealization = partial failures of gluing.
2. Nausea = inconsistent overlaps (i.e., large cocycles) interpreted as bodily threat.
3. Anesthesia = disabling of the sheaf functor: no global section possible.
At least for me it provides a consistent working model for hallucinogenic experiences, synesthesia, phantom limbs, and split-brain scenarios. If anything, the ways in which sensory integration fails are more interesting than the ways in which it succeeds.
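The gluing picture can be sketched in a few lines of toy code (everything here is illustrative, not a real sheaf: senses are just dicts of region-to-value reports, and an "obstruction" is any disagreement on an overlap). A global percept exists only when all overlaps agree; nausea-style cases show up as nonempty obstruction lists.

```python
# Toy model of sheaf-style gluing: each "sense" reports values over
# the regions it covers, and a global percept exists only if all
# senses agree wherever they overlap.

def gluing_obstructions(sections):
    """sections: dict mapping sense name -> dict of region -> value.
    Returns a list of overlap disagreements (the "cocycles")."""
    obstructions = []
    names = sorted(sections)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            overlap = sections[a].keys() & sections[b].keys()
            for region in sorted(overlap):
                if sections[a][region] != sections[b][region]:
                    obstructions.append((a, b, region))
    return obstructions

def glue(sections):
    """Return a merged global section if overlaps agree, else None."""
    if gluing_obstructions(sections):
        return None  # no global section: the integration-failure cases
    merged = {}
    for local in sections.values():
        merged.update(local)
    return merged

# Coherent data glues into one percept:
coherent = {
    "vision":     {"head": "upright", "horizon": "level"},
    "vestibular": {"head": "upright"},
}
assert glue(coherent) == {"head": "upright", "horizon": "level"}

# Motion-sickness-style case: vision and vestibular disagree on an overlap.
nauseating = {
    "vision":     {"head": "moving"},
    "vestibular": {"head": "still"},
}
assert glue(nauseating) is None
assert gluing_obstructions(nauseating) == [("vestibular", "vision", "head")]
```

The point of the sketch is only that "failure to integrate" is a checkable, local condition: you don't need the whole percept to see the obstruction, just one bad overlap.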
The way I look at it, the sensors provide data as activations, and awareness is some output gated by a thresholding or activation function.
Sense-making and consciousness, in my mental model, happen after the fact, and they try to happen even with nonsense data -- as opposed to being, as I was reading you to lean toward, the consequence of sensory data standing in a sufficiently nice relationship to one another.