This is a reply to an old comment (since I cannot reply in the original thread):
https://news.ycombinator.com/item?id=44452679
> Even assuming python's foreach loop in these cases get optimized down to a very bare for loop, the operations being performed are dominated by the looping logic itself, because the loop body is so simple.
> Each iteration of a for loop performs one index update and one termination comparison. For a simple body that is just an XOR, that's the difference between performing 5 operations (update, exit check, read array, XOR with value, XOR with index) per N elements in the one loop case versus 7 operations (update, exit, read array, XOR with value, then update, exit, XOR with index) in the two loop case. So we're looking at a 29% savings in operations.
> It gets worse if the looping structure does not optimize to a raw, most basic for loop and instead constructs some kind of lazy collection iterator generalized for all kinds of collections it could iterate over.
> The smaller the loop body, the higher the gains from optimizing the looping construct itself.
Let's test your claims:
import random
import time

n = int(1e7)
A = list(range(1, n + 1))
random.shuffle(A)
print("Removed:", A.pop())  # remove one element; the XOR trick will find it

# One loop: XOR both the 1-based index and the value each iteration.
t = time.time()
result = 0
for idx, val in enumerate(A):
    result ^= idx + 1
    result ^= val
result ^= n  # A has n-1 elements, so idx+1 only covers 1..n-1; fold in n separately
print("1-loop:", time.time() - t)
print("Missing:", result)

# Two loops: each loop body is a single XOR.
t = time.time()
result = 0
for value in range(1, n + 1):
    result ^= value
for value in A:
    result ^= value
print("2-loop:", time.time() - t)
print("Missing:", result)
A sample run gives:
Removed: 2878763
1-loop: 1.4764018058776855
Missing: 2878763
2-loop: 1.1730067729949951
Missing: 2878763
And after swapping the order of the code blocks just to ensure there's nothing strange going on:
Removed: 3217501
2-loop: 1.200080156326294
Missing: 3217501
1-loop: 1.5053350925445557
Missing: 3217501
So indeed we have about a 20% speedup, only in the opposite direction from the one you predicted. Perhaps it's best not to assume when talking about performance.
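One plausible explanation (a sketch, not a definitive analysis): in CPython the interpreter's per-bytecode overhead dominates, and the enumerate loop pays for unpacking an (index, value) tuple on every iteration, while each of the two simple loops runs a one-XOR body. The standard dis module makes the extra per-iteration work visible; the function names one_loop and two_loop below are mine, wrapping the same logic as the benchmark:

    import dis
    import io

    def one_loop(A, n):
        # Single pass: XOR both the 1-based index and the value each iteration.
        result = 0
        for idx, val in enumerate(A):
            result ^= idx + 1
            result ^= val
        return result ^ n  # idx+1 only covers 1..n-1, so fold in n separately

    def two_loop(A, n):
        # Two passes, each with a minimal one-XOR body.
        result = 0
        for value in range(1, n + 1):
            result ^= value
        for value in A:
            result ^= value
        return result

    # Both compute the same missing element, e.g. [2, 1, 4] with n=4 -> 3.
    print(one_loop([2, 1, 4], 4), two_loop([2, 1, 4], 4))

    # Capture the bytecode of the single-pass version: the enumerate loop
    # contains an UNPACK_SEQUENCE per iteration that neither simple loop has.
    buf = io.StringIO()
    dis.dis(one_loop, file=buf)
    print("UNPACK_SEQUENCE" in buf.getvalue())

On my reading of the disassembly, that extra tuple build-and-unpack per element is a reasonable candidate for the gap, though a full answer would need profiling.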