Searle's Chinese Room argument isn't about whether machines can think, but about whether computation alone can generate understanding.
It shows that syntax (in this case, the formal manipulation of symbols) is insufficient for semantics, i.e., genuine meaning. Whether you are a machine or a human being, I can teach you every grammatical and syntactic rule of a language, and that still is not enough for you to understand what is being said or for meaning to arise, just as in the thought experiment: from the outside it looks like you understand, but the agent in the room has no grasp of the meaning being imparted. You cannot derive semantics from syntax.
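To make the point concrete, here is a minimal Python sketch of a purely syntactic symbol manipulator, in the spirit of (though far simpler than) Searle's setup. The rulebook entries and translations are invented for illustration only:

```python
# A toy "Chinese Room": the program maps input symbol strings to output
# symbol strings by rote lookup. It inspects only the *shape* of the
# symbols (syntax); it has no representation of their meaning (semantics).
RULEBOOK = {
    "你好吗?": "我很好, 谢谢.",    # "How are you?" -> "I'm fine, thanks."
    "今天天气好": "是的, 很好.",   # "Nice weather today" -> "Yes, very nice."
}

def chinese_room(input_symbols: str) -> str:
    """Return whatever output the rulebook pairs with the input string."""
    # Default reply if no rule matches: "Please say that again."
    return RULEBOOK.get(input_symbols, "请再说一遍.")

if __name__ == "__main__":
    # From the outside, the exchange can look like understanding.
    print(chinese_room("你好吗?"))
```

The program can produce a fluent-looking exchange by matching symbol shapes alone; Searle's point is that no amount of such rule-following, however elaborate the rulebook, adds up to grasping what the symbols mean.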
Searle is highlighting a limitation of computationalism and of the idea of 'Strong AI': no matter how sophisticated you make your machine, it will never achieve genuine understanding, intentionality, or consciousness, because it operates purely through syntactic processes.
The implications reach beyond the thought experiment itself: the idea has influenced the philosophy of language, linguistics, AI and ML, epistemology, and cognitive science. Boiled down, one major implication is that we still lack a rock-solid theory of how semantics arises, whether in machines or in humans.