For my sins I occasionally create large PRs (> 1,000 files) in GitHub, and teammates (who mostly all use Chrome) will sometimes say "I'll approve once it loads for me..."
"Rename 'CustomerEmailAddress' to 'CustomerEmail'"
"Upgrade 3rd party API from v3 to v4"
I genuinely don't get this notion of a "max # of files in a PR". It all comes off to me as post hoc justification of really shitty technology decisions at GitHub.
A computer will be able to tell, with 100% reliability, that the 497th change has a misspelled `CusomerEmail`, or that change 829 is a regexp failure that trimmed the boolean "CustomerEmailAddressed" to "CustomerEmailed"; humans, not so much.
Or that you just had to Ctrl+F "CustomerEmail" and see whether you get 1,000 matches, matching the number of changed files, or only 999 due to some typo.
Or use the web interface to filter by file type and batch your review.
Or...
The point is just that none of those approaches comes anywhere close to exceeding our memory/attention capacity.
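
To make that concrete, here's a totally hypothetical sketch of the kind of mechanical check I mean, assuming the rename was `CustomerEmailAddress` -> `CustomerEmail`, that it runs from the repo root of the PR branch, and that `origin/main` is the base; the allowlist is made up for illustration and isn't from any real tool:

```python
#!/usr/bin/env python3
# Sketch: mechanically verify a bulk rename across every changed file.
# The base branch, identifier names, and allowlist are illustrative only.
import re
import subprocess
import sys
from pathlib import Path

OLD = "CustomerEmailAddress"
ALLOWED = {"CustomerEmail", "CustomerEmailAddressed"}  # names that may legitimately remain

def changed_files(base="origin/main"):
    # Files touched by the PR, relative to the base branch.
    out = subprocess.run(["git", "diff", "--name-only", base],
                         capture_output=True, text=True, check=True)
    return [p for p in out.stdout.splitlines() if p]

def main():
    ident = re.compile(r"\b\w*Email\w*\b")  # rough net for Email-ish identifiers
    problems = []
    for path in changed_files():
        p = Path(path)
        if not p.is_file():
            continue  # deleted or moved away in this PR
        text = p.read_text(encoding="utf-8", errors="ignore")
        for token in sorted(set(ident.findall(text))):
            # Anything Email-ish that isn't on the allowlist is either the old
            # name, a typo like 'CusomerEmail', or an over-eager regexp that
            # trimmed 'CustomerEmailAddressed' down to 'CustomerEmailed'.
            if token not in ALLOWED:
                problems.append(f"{path}: unexpected identifier '{token}'")
    print("\n".join(problems) or "rename looks clean in all changed files")
    return 1 if problems else 0

if __name__ == "__main__":
    sys.exit(main())
```

A script like that checks all 1,000 files in a second or two and never gets bored at file 497.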
I work in a large C++ codebase and a rename like that will actually just crash my vscode instance straight-up.
(There are good automated tools that make it straightforward to script up a repository-wide mutation like this, however. But they still generate PRs that require human review; the one I used would break the PR up into tranches of 50-ish files each, then hunt down individuals with authority over each tranche's root directory and assign it to them. Quite useful!)
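For flavor, a rough sketch of that tranche-splitting idea, not the actual tool (whose internals I don't know); the owner map and directory-to-reviewer lookup here are hypothetical stand-ins for something like a CODEOWNERS file:

```python
#!/usr/bin/env python3
# Sketch: split a big change list by top-level directory, cut each group into
# chunks of ~50 files, and pick a reviewer per directory. Owner map is made up.
from itertools import islice
from pathlib import PurePosixPath

TRANCHE_SIZE = 50

OWNERS = {  # hypothetical root-directory -> reviewer map
    "billing": "alice",
    "frontend": "bob",
}

def root_dir(path: str) -> str:
    parts = PurePosixPath(path).parts
    return parts[0] if len(parts) > 1 else "."

def chunked(items, size):
    it = iter(items)
    while chunk := list(islice(it, size)):
        yield chunk

def make_tranches(changed_files):
    by_root = {}
    for f in sorted(changed_files):
        by_root.setdefault(root_dir(f), []).append(f)
    for root, files in by_root.items():
        reviewer = OWNERS.get(root, "repo-maintainers")
        for i, chunk in enumerate(chunked(files, TRANCHE_SIZE), 1):
            yield {"root": root, "part": i, "reviewer": reviewer, "files": chunk}

if __name__ == "__main__":
    demo = [f"billing/invoice_{n}.cc" for n in range(120)] + ["frontend/app.ts"]
    for t in make_tranches(demo):
        print(f"{t['root']} part {t['part']}: {len(t['files'])} files -> {t['reviewer']}")
```

Each yielded tranche would then become its own small PR assigned to the right person, which keeps the per-reviewer load sane even when the overall change touches thousands of files.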