To understand why your (totally understandable) conclusion is wrong, it helps to think about what it really means to take the average number of other winners only over the draws where you win.
The reason is similar to the Monty Hall Problem (https://en.wikipedia.org/wiki/Monty_Hall_problem).
To see it, let's think about the simplest version of this problem... 2 people playing the lottery, each with an independent 50/50 chance to win.
So, we can map out all the possible combinations:
A wins (50%) and B wins (50%) - 25% of the time
A wins (50%) and B loses (50%) - 25% of the time
A loses (50%) and B wins (50%) - 25% of the time
A loses (50%) and B loses (50%) - 25% of the time
So we have 4 equally likely outcomes. To figure out the average number of winners, we just add up the winners across all of them and divide by 4: two winners in scenario 1, plus one in scenario 2, plus one in scenario 3, plus zero in scenario 4, for 4 winners in total... divide that by 4, and we get an average of 1 winner per scenario.
This makes sense... a 50/50 chance of winning with 2 people leads to an average of 1 winner per draw.
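If you'd rather check that with code than by hand, here's a quick Python sketch (mine, not from the original question) that enumerates those four equally likely outcomes and averages the winner counts:

```python
from itertools import product

# All four equally likely outcomes for players A and B (True = wins).
outcomes = list(product([True, False], repeat=2))

# Average number of winners over every outcome.
print(sum(a + b for a, b in outcomes) / len(outcomes))  # 1.0
```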
Now let's see what happens if we look only at the situations where player A wins; in our example, that is the first two scenarios. We throw out scenarios 3 and 4, since player A loses in both of them.
Scenario 1 has 2 winners (A and B) while scenario 2 has 1 winner (just A)... so across the two equally likely outcomes where A is a winner, we have a total of 3 winners... divide that 3 by the two scenarios, and we get an average of 1.5 winners per scenario where A is a winner.
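The same little sketch (again, just my illustration) gives 1.5 if you filter down to the outcomes where A wins before averaging:

```python
from itertools import product

# All four equally likely outcomes for players A and B (True = wins).
outcomes = list(product([True, False], repeat=2))

# Keep only the outcomes where player A wins.
a_wins = [(a, b) for a, b in outcomes if a]

# Average number of winners in those remaining outcomes.
print(sum(a + b for a, b in a_wins) / len(a_wins))  # 1.5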
Why does this happen? In this simple example it is easy to see why... we removed the 1/4 chance where we have ZERO winners, which was bringing down the average.
This same thing happens no matter how many players there are or what the odds are... by selecting only the scenarios where a specific player wins, we throw out every outcome in which that player loses, including all the possible outcomes where zero people win, so the scenarios that are left have more winners on average.
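To see that this holds beyond the 2-player, 50/50 toy case, here's a small simulation (my own sketch; the 10 players and 10% win chance are just numbers I picked) comparing the overall average number of winners with the average in only the draws where player A wins:

```python
import random

# Made-up parameters: n players, each winning independently with probability p.
n, p = 10, 0.1
trials = 200_000
random.seed(0)

total_winners_all = 0    # winners summed over every draw
total_winners_a = 0      # winners summed over draws where player A wins
draws_where_a_wins = 0

for _ in range(trials):
    wins = [random.random() < p for _ in range(n)]
    winners = sum(wins)
    total_winners_all += winners
    if wins[0]:          # player A is index 0
        total_winners_a += winners
        draws_where_a_wins += 1

print(total_winners_all / trials)            # close to n*p = 1.0
print(total_winners_a / draws_where_a_wins)  # close to 1 + (n-1)*p = 1.9
```

With these made-up numbers, the first print lands near 1.0 and the second near 1.9: you are a guaranteed winner in every draw you're counting, and everyone else's chances are unchanged on top of that.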