
446 points | talboren | 1 comment
muglug
Improvements merged within the last two days by the WebKit team: https://github.com/orgs/community/discussions/170922#discuss...

For my sins I occasionally create large PRs (> 1,000 files) in GitHub, and teammates (who mostly all use Chrome) will sometimes say "I'll approve once it loads for me..."

patrickmay
That seems essentially unreviewable. If you can share without violating an NDA, what kind of PR would involve that many files?
bob1029
"Upgrade solution from .NET Framework 4.8 => .NET 8"

"Rename 'CustomerEmailAddress' to 'CustomerEmail'"

"Upgrade 3rd party API from v3 to v4"

I genuinely don't get this notion of a "max # of files in a PR". It all comes off to me as post hoc justification of really shitty technology decisions at GitHub.

ambicapter
It's not GitHub-specific advice, it's about reviewability of the PR vs. human working memory/maximum attention span.
eviks
How much working memory/attention span is required to look through 1000 identical lines "-CustomerEmailAddress +CustomerEmail"?
shadowgovt
Ideally, you automate a check like that, because it turns out humans are profoundly bad at that kind of pattern recognition.

A computer will tell you with 100% reliability that the 497th change has a misspelled `CusomerEmail`, or that change 829 is a regex failure that trimmed the boolean `CustomerEmailAddressed` to `CustomerEmailed`; humans, not so much.
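The check described above can be sketched in a few lines of Python (a hypothetical script, not anything GitHub or the commenter provides): walk a unified diff and flag any `+` line that isn't exactly its `-` partner with the whole-word rename applied. The word boundary (`\b`) is what catches a regex that bled into a longer identifier like `CustomerEmailAddressed`.

```python
import re

def check_rename_diff(diff_lines, old="CustomerEmailAddress", new="CustomerEmail"):
    """Return a list of (line_no, reason) for hunk lines that are not
    an exact whole-word rename of old -> new."""
    problems = []
    removed = None  # the most recent '-' line, awaiting its '+' partner
    for no, line in enumerate(diff_lines, 1):
        if line.startswith("-") and not line.startswith("---"):
            removed = line[1:]
        elif line.startswith("+") and not line.startswith("+++"):
            added = line[1:]
            if removed is None:
                problems.append((no, "addition with no matching removal"))
                continue
            # The '+' line must equal the '-' line with the rename applied
            # to whole words only.
            expected = re.sub(rf"\b{re.escape(old)}\b", new, removed)
            if added != expected:
                problems.append((no, f"expected {expected!r}, got {added!r}"))
            removed = None
    return problems
```

A misspelled `CusomerEmail` shows up as a mismatch against the expected line, and `CustomerEmailAddressed` -> `CustomerEmailed` is flagged because the word boundary prevents the rename from matching inside the longer identifier, so the expected line is unchanged.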

makeitdouble
You're not just reviewing the individual lines, but also their context and which files are impacted. And automating that part would still mean reviewing the automation itself, plus the 1,000+ changes it makes.

Sure, 1,000+ changes kill the soul, and we're not good at that, but sometimes there's just no other decent choice.