As a mathematical theory, probability is well-defined. It is an application of a larger topic called measure theory, which also provides the theoretical underpinnings for integration in calculus.
Every probability space is defined in terms of three things: a set of possible outcomes, a set of subsets of that set (in plain language: a way of grouping outcomes together), and a function that maps those subsets to numbers between 0 and 1. To be valid, the set of subsets, a.k.a. the events, needs to satisfy additional rules (it must be closed under complements and countable unions), and the function must assign 1 to the whole set and be additive over disjoint events.
All your example p(X) = 0.5 says is that some function assigns the value of 0.5 to some subset which you've called X.
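To make this concrete, here is a minimal sketch of that triple for a finite space, using a fair coin flip as the example (the outcome labels "H" and "T" and the uniform assignment are my own assumptions for illustration):

```python
from itertools import chain, combinations

# The sample space: the set of all possible outcomes.
omega = frozenset({"H", "T"})

def power_set(s):
    """All subsets of s. For a finite space, the power set is a
    valid event set (it satisfies the closure rules trivially)."""
    items = list(s)
    return [frozenset(c)
            for c in chain.from_iterable(
                combinations(items, r) for r in range(len(items) + 1))]

# The events: the set of subsets we assign probabilities to.
events = power_set(omega)

def P(event):
    """The probability function: maps each event to a number in [0, 1].
    Here, uniform: each outcome carries equal weight."""
    return len(event) / len(omega)

X = frozenset({"H"})   # the event "the coin lands heads"
print(P(X))            # 0.5 -- the p(X) = 0.5 from the text
print(P(omega))        # the whole space always gets probability 1
print(P(frozenset()))  # the impossible event gets 0
```

Nothing in this assignment "knows" about coins; P is just a function on subsets that happens to satisfy the rules, which is exactly the point.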
That it seems to be good at modelling the real world can be attributed to the origins of the theory: it didn't arise ex nihilo; it was constructed precisely because people wanted a formal model for seemingly random events in the real world.