Well, I simulated it and the numbers seem to agree with you. The example is interesting. I still have trouble seeing why my original reasoning doesn't hold, though. I'll give an example; if anyone can clear up the issue, that would be appreciated.
1/10 odds, 10 entrants, one winner expected on average.
Given that a particular entrant wins:
Expect: 0.9 + 1 = 1.9 winners
Given that the same particular entrant loses:
Expect: 0.9 winners
Over all cases we see:
0.1(1.9) + 0.9(0.9) = 1 winner
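For what it's worth, this is roughly the kind of check I mean (a minimal sketch, assuming independent 1/10 draws and conditioning on one arbitrary fixed entrant, index 0):

    import random

    TRIALS = 200_000
    P_WIN = 0.1
    ENTRANTS = 10

    win_totals, lose_totals = [], []
    for _ in range(TRIALS):
        results = [random.random() < P_WIN for _ in range(ENTRANTS)]
        total = sum(results)
        # Condition on whether the fixed entrant (index 0) won or lost
        (win_totals if results[0] else lose_totals).append(total)

    print("E[winners | entrant 0 wins]  ~", sum(win_totals) / len(win_totals))    # ~1.9
    print("E[winners | entrant 0 loses] ~", sum(lose_totals) / len(lose_totals))  # ~0.9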
That checks out, but if the numbers are correct, then any winner should be able to arrive at the higher average while knowing only that there is at least one winner. So in cases where there is at least one winner:
P(winners >= 1) = 1 - (9/10)^10 ≈ 65%
The expectation should work out to 1.9, and the rest of the time we expect zero winners. However, if I use those numbers I get an overall expected number of winners of about 0.65 × 1.9 ≈ 1.237, which would increase the overall number of winners across all cases. For the overall number to work out to one, the expected number of winners when there is at least one winner would have to be about 1/0.65 ≈ 1.535. That suggests the expected outcome is different depending on whether you check your own ticket or someone else's, even if you see the same thing?
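And here's the same sort of sketch, but conditioning on "at least one winner" instead of on a particular entrant, which is where the ~1.535 comes from (same assumptions as above):

    import random

    TRIALS = 200_000
    P_WIN = 0.1
    ENTRANTS = 10

    totals_at_least_one = []
    for _ in range(TRIALS):
        total = sum(random.random() < P_WIN for _ in range(ENTRANTS))
        if total >= 1:
            # Keep only the trials where at least one entrant won
            totals_at_least_one.append(total)

    print("P(winners >= 1)           ~", len(totals_at_least_one) / TRIALS)                      # ~0.65
    print("E[winners | winners >= 1] ~", sum(totals_at_least_one) / len(totals_at_least_one))    # ~1.535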
Am I just not on for math today? I thought the solution to the paradox would be that the higher expectation discounts outcomes with zero winners.