> The flaw of the Chinese Room argument is the need to explain why it does not apply to humans as well.
But it does: the thought experiment continues by supposing that I hand a human those same lookup tables and instructions for using them, instead of having the computer run the procedure. The human doesn't understand the foreign language either, not in the way a native speaker does.
The point is that no formal procedure or algorithm is sufficient for such a system to have understanding. Even if you memorized all the lookup tables and instructions and executed this procedure entirely in your head, you would still lack understanding.
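The procedure in question can be made concrete. Below is a minimal sketch, with an invented rulebook and invented entries, of the kind of purely syntactic symbol manipulation the argument targets: the operator (human or CPU) only matches input shapes against table keys, and nothing in the process represents what any symbol means.

```python
# Toy "Chinese room": a purely syntactic rule-follower. The table contents
# are invented for illustration; a real rulebook would be vastly larger.

RULEBOOK = {
    "你好吗": "我很好，谢谢",       # "How are you?" -> "I'm fine, thanks"
    "你叫什么名字": "我没有名字",   # "What is your name?" -> "I have no name"
}

def chinese_room(symbols: str) -> str:
    """Return whatever output the rulebook dictates for the input string.

    The operator only compares character shapes against table keys;
    no step in this procedure involves the meaning of the symbols.
    """
    # Fallback rule: "Please say that again"
    return RULEBOOK.get(symbols, "请再说一遍")

print(chinese_room("你好吗"))  # a fluent-looking reply, produced blindly
```

Whether executed by a processor or memorized and carried out by hand, every step is the same shape-matching operation, which is exactly why Searle holds that running it confers no understanding.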
> Does a single neuron "understand" Chinese? 10 neurons? 100? 1 million?
This sounds like a sorites paradox [0]. I don't know how to resolve it, other than to observe that our notions of "understanding," "thought," and "intelligence" are ill-defined: heuristic approximations rather than terms with precise meanings. Hence the tendency in computer science to use thought experiments like Turing's imitation game or Searle's Chinese room as proxies for assessing intelligence, in lieu of being able to treat these terms more rigorously.
[0] https://plato.stanford.edu/entries/sorites-paradox/