Does anyone have concrete information?
For my sins I occasionally create large PRs (> 1,000 files) in GitHub, and teammates (who mostly all use Chrome) will sometimes say "I'll approve once it loads for me..."
A lot of the time we just break the branch permissions on the repo we are using and run release branches without PRs and ignore the entire web interface.
It's a product of many cooks and their brilliant ideas and KPIs, a social network for devs and code being the most "brilliant" of them all. For day-to-day dev operations it's something so mediocre that even Gitlab looks like the golden standard compared to Github.
And no, the problem is not "Rails" or [ insert any other tech BS to deflect the real problems ].
[1]: https://yoyo-code.com/why-is-github-ui-getting-so-much-slowe...
Slow as hell and the Safari search function stopped working. I loaded the same url on Firefox and it was insta-fast.
I see loading spinners everywhere and even the page transitions take ages compared to before.
I am not sure what metric they are using to justify ditching the perfectly working SSR they used before.
"Rename 'CustomerEmailAddress' to 'CustomerEmail'"
"Upgrade 3rd party API from v3 to v4"
I genuinely don't get this notion of a "max # of files in a PR". It all comes off to me as post hoc justification of really shitty technology decisions at GitHub.
- Project managers putting constant pressure on developers to deliver as fast as possible. It doesn't even matter if velocity will be lost in the future, or if the company might lose customers, or even if it breaks the law.
- Developers pushing back on things that can backfire and burning political capital and causing constant burnout. And when things DO backfire, the developer is to blame for letting it happen and not having pushed it more in the first place.
- Developers who learned that the only way to win is by not giving a single fuck, and just trucking on through the tasks without much thought.
This might sound highly cynical, but unfortunately this is what it has become.
Developers are way too isolated from the end result, and accountability is non-existent for PMs who isolate devs from the result, because "isolating developers" is seen as their only job.
EDIT: This is a cultural problem that can't be solved by individual contributors or by middle management without raising hell and putting a target on their backs. Only cultural change enforced by C-Levels is able to change this, but this is *not* in the interest of most CEOs or CTOs.
If you put a lot of momentum behind a product with that mentality, you get features piled on tech debt. No one gets enthusiastic about paying that down, because it was done by some prior team you have no understanding of, and it gets in the way of what management wants, which is more features so they can get bonuses.
Speaking up about it gets you shouted down and thrown on a performance improvement plan because you aren't aligned with your capitalist masters.
My CPU goes to 100% and fans roaring every time I load the dashboard and transactions. I can barely click on customers/subscriptions/etc. I can't be the only one...
If a developer has to put up a fight in order to push back against the irresponsibility of a non-technical person, they by definition don't have ownership.
Good to know others are feeling it too, hopefully it can get resolved soon. In the meantime, I'll try my PR reviews on FF.
Update: Just tested my big PR (+8,661, -1,657) on FF and it worked like a charm!
But there is also the Safari Technology Preview, which installs as a separate app but is a bit more unstable. Similar to Chrome Canary.
> publicly disseminate information regarding the performance of the Cloud Products
https://web.archive.org/web/20210624221204/https://www.atlas...
Their "solution" was to enable SSR for us ranters' accounts.
Clean code argues that instead of total rewrites you should focus on gradual improvements: refactor code so that over time the work pays dividends, without re-living all the bugs you lived through 5 years ago whose resolutions you no longer recall. On every rewrite project I've ever worked on, we ran into bugs we had already fixed years prior, or the team before me had.
There are times when a total rewrite might be the best and only option, such as deprecated platforms (think of Visual Basic 6 apps that will never get threading).
What frustrates me more is that GitHub used to be open to browse, and the search worked. Now, in their effort to force you to make an account (I HAVE LIKE TEN ALREADY) and force you to log in, they include a few "dark patterns" where parts of search don't work at all.
It's hard to know which member of the duopoly is more guilty for breaking GitHub for me, but I find that blaming both often guarantees success.
I could like, buy a new computer and stuff. But you know, the whole Turing complete thing feels like a lie in the age of planned obsolescence. So web standards are too.
The fact that they have this ability / awareness and haven't completely reverted by now is shocking to me.
It was being hosted on another continent. It was written in PHP. It was rendering server-side with just some light JS on my end.
That used to be the norm.
It's really hard to fight the trend especially in larger orgs.
The Cloud: making operations that take single-digit seconds on a local Raspberry Pi 2 over home Internet take a few minutes instead.
What a time to be alive.
Planned obsolescence is some of it, some of it is abstractions making it easier for more people to make software (at the cost of using significantly more compute) and Moore’s law being able to support those abstraction layers. Just imagine if every piece of software had to be written in C, the world would look a whole lot different.
I also think we’ve gone a bit too far into abstraction land, but hey, that’s where we are and it’s unlikely we are going back.
Turing completeness is almost an unrelated concept in all of this if you ask me, and if anything it's that completeness that has driven the higher and higher memory and compute requirements.
I don’t know if that’s a good or realistic rule for most projects, but I imagine for performant types of applications, that’s exactly what it takes to prevent eventual slowdown.
Don't listen to the opinions of the developers writing this code. Listen to the opinions of the people making these tech stack decisions.
Everything else is a distant second, which is why you get shitty performance and developers who cannot measure things. It also explains why, when you ask the developers about any of this, you get answers of bizarre cognitive complexity. The developers, in most cases, know what they need to do to be hired and cannot work outside those lanes, yet they are simultaneously aware of the various limitations of what they release. They know the result is slow, likely has accessibility problems, scales poorly, and so on, but their primary concern is retaining employment.
Today's version is: "You will get fired unless you use React".
So every site now uses React no matter if the end result is a dog slow Github.
Bad developers look at "what is everybody else using?".
Good developers look at "what is the best and simplest (KISS) tool for this?"
https://chromewebstore.google.com/detail/make-github-great-a...
Gitlab is anything but light and by default tends to be slow, but it's surprisingly fast with a good server (nothing crazy, but big) and caching.
Unrealistic timelines, implementing what should be backend logic in the frontend: there's a bunch of ways SPAs tend to be a trap. Was React a bad idea? Can anyone point to a single well-made React app?
A very simple example: migrating from JavaEE to JakartaEE. Every single Java source file has to have the imports changed from "javax." to "jakarta.", which can easily be thousands of files. It's also easy to review (and any file which missed that change will fail when compiling on the CI).
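The change itself is mechanical enough to script. A rough sketch in TypeScript/Node following the description above (the "src" root is an assumption, and a real migration would match only the EE package prefixes rather than every javax import):

    // Sketch: rewrite "import javax.*" to "import jakarta.*" across a Java source tree.
    // Caveat: only the EE packages actually move; a production script would list
    // specific prefixes (javax.servlet, javax.persistence, ...) instead of all of javax.
    import { readdirSync, readFileSync, writeFileSync, statSync } from "node:fs";
    import { join } from "node:path";

    function walk(dir: string, out: string[] = []): string[] {
      for (const entry of readdirSync(dir)) {
        const full = join(dir, entry);
        if (statSync(full).isDirectory()) walk(full, out);
        else if (full.endsWith(".java")) out.push(full);
      }
      return out;
    }

    let changed = 0;
    for (const file of walk("src")) {
      const before = readFileSync(file, "utf8");
      // Only touch import statements, not arbitrary occurrences of "javax."
      const after = before.replace(/^import\s+javax\./gm, "import jakarta.");
      if (after !== before) {
        writeFileSync(file, after);
        changed++;
      }
    }
    console.log(`rewrote imports in ${changed} files`);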
I don’t think the culprit apps would have substantially better UX if they were rendered on the server, because these issues tend to be a consequence of devs being pressured to rapidly release new features without regard to quality.
I know some people feel like Apple is aggressive in this respect, but that's an 8 year old version of a browser. That's like taking off all of the locks on your house, leaving the doors and windows open all while expecting your house to never have uninvited guests.
A computer will be able to tell that the 497th has a misspelled `CusomerEmail` or that change 829 is a regexp failure that trimmed the boolean "CustomerEmailAddressed" to "CustomerEmailed" with 100% reliability; humans, not so much.
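And that kind of check is itself a few lines of script. A sketch that scans the added lines of a unified diff for exactly those failure modes (the identifier names come from the example above; everything else is illustrative):

    // Sketch: sanity-check a mechanical CustomerEmailAddress -> CustomerEmail rename
    // by flagging misspellings, missed renames, and collateral regexp damage.
    // Reads a unified diff from stdin (however you choose to produce it).
    import { readFileSync } from "node:fs";

    const diff = readFileSync(0, "utf8"); // fd 0 = stdin
    const added = diff.split("\n").filter(l => l.startsWith("+") && !l.startsWith("+++"));

    const problems: string[] = [];
    for (const line of added) {
      if (/CusomerEmail/.test(line)) problems.push(`misspelling: ${line}`);
      if (/CustomerEmailAddress\b/.test(line)) problems.push(`rename missed: ${line}`);
      if (/\bCustomerEmailed\b/.test(line)) problems.push(`collateral damage: ${line}`);
    }

    if (problems.length > 0) {
      console.error(problems.join("\n"));
      process.exit(1);
    }
    console.log(`checked ${added.length} added lines, nothing suspicious`);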
I've definitely managed to make a page that uses almost no JavaScript and is dog-slow on Firefox (until Mozilla updated the rendering engine) just by building a table out of flexboxes. There's plenty of places for browsers to chug and die in the increasingly-complicated standard they adhere to.
The short answer is: no, they don't. Google Cloud relied upon some Googlers happening to be Firefox users. We definitely didn't have a "machine farm" of computers running relevant OS and browser versions to test the UI against (that exists in Google for some teams and some projects, but it's not an "every project must have one" kind of resource). When a major performance regression was introduced (in Firefox only) in a UI my team was responsible for once, we had a ticket filed that was about as low-priority as you can file a ticket. The solution? Mozilla patched their rendering engine two minor versions later and the problem went away.
I put more than zero effort into fixing it, but tl;dr I had to chase the problem all the way to debugging the browser rendering engine itself via a build-from-source, and since nobody had set one of those up for the team and it was the first time I was doing it myself, I didn't get very far; Google's own in-house security got in the way of installing the relevant components to make it happen, I had to understand how to build Firefox from source in the first place, my personal machine was slow for the task (most of Google's builds are farm-based; compilation happens on servers and is cached, not on local machines).
I simply ran out of time; Mozilla fixed the issue before I could. And, absolutely, I don't expect it would have been promotion-notable that I'd pursued the issue (especially since the solution of "procrastinating until the other company fixes it" would have cost the company 0 eng-hours).
I can't speak for GitHub / Microsoft, but Google nominally supports the N (I think N=2) most recent browser versions for Safari, Edge, Chrome, Firefox, but "supports" can, indeed, mean "if Firefox pushes a change that breaks our UI... Well, you've got three other browsers you could use instead. At least." And of course, yes, issues with Chrome performance end up high priority because they interfere with the average in-house developer experience.
The usual response is something like "if you're correct, wouldn't that mean there are hundreds of cases where this needs to be fixed to resolve this bug?". The answer obviously being yes. Incoming 100+ file PR to resolve this issue. I have no other ideas for how someone is supposed to resolve an issue in this scenario
Too bad Phabricator is maintenance-only now https://en.m.wikipedia.org/wiki/Phabricator
Sure 1000+ changes kills the soul, we're not good at that, but sometimes there's just no other decent choice.
Or that you had to Ctrl+F "CustomerEmail" and see whether you had 1,000 matches matching the number of changed files, or only 999 due to some typo.
Or using the web interface to filter by file type to batch your reviews.
Or...
Just that in none of those cases there is anything close to our memory/attention capacity.
Gitea is an example I like because it stores the repository as a bare repository, the same as if I did git clone --bare. I bring it up because when I stopped running Gitea, I could easily go into the data, back up all the repositories, and easily reuse them somewhere else.
As an aside, I was an employee around then and I vividly remember that the next half there was a topline goal to improve web speed. Hmmmm, I wonder what could have happened?
I actually have been trying to figure out how to get my React application (unreleased) to perform less laggy in Safari than it does in Firefox/Chrome, and it seems like it is related to all the damn DOM elements. This sucks. Virtualizing viewports adds loads of complexity and breaks some built-in browser features, so I generally prefer not to do it. But, at least in my case, Safari seems to struggle with doing certain layout operations with a shit load of elements more than Chrome and Firefox do.
On react, it's funny that sites where the frontend part is really crucial tend to move away from generic frameworks and do really custom stuff to optimize. I'm thinking about Notion, or Google Sheets, or Figma, where the web interface is everything and pretty early on they just bypass the frontend stacks generally used by the industry.
From a random site, navigate to a GitHub repo, then navigate to a file in the repo, and hit back: I'm on the random site. Hit forward and I'm on the file.
So annoying.
One of a large handful of issues I've encountered post-React conversion.
I'd put it on the end user for not updating software on 15 y/o hardware and still expecting the outside world to interact cleanly.
By all means. It sometimes feels like React is more the symptom than the actual issue, though.
Personally I generally just like having less code; generally makes for fewer footguns. But that's an incredibly hard sell in general (and of course not the entire story).
I would rather just see the steps you ran to generate the diff and review that instead.
I have an ever growing directory listing using SolidJS, and it's up to about 25,000 items. Safari macOS and iOS two major versions ago actually handled it well. After the last major update, my phone rendered it faster than an m1 MacBook Pro.
The main problem is that it tries to do away with a view model layer so you can get the data and render it directly in the components, but that makes managing multiple components from a high level perspective literally impossible. Instead of one view model, you end up with 50 React-esque utilities to achieve the same result.
As you point out, it's wildly successful and is the backbone of many groups' internal communication. Many companies would just stop working without Slack; that's a testament to the current team's efforts, but something that critical would also merit better performance.
I'd make the comparison with Figma, which went the extra mile to bring a level of smoothness that just wouldn't be there otherwise.
That's probably true.
> 15 y/o
It's a matter of expectations, many laptops that old still work decently enough with a refreshed battery. Funnily enough, win10 was released back in 2015, and one can still get support for it for at least another 3 years until 2028, even on the customer license.
Which, it seems, was a result of the M$ acquisition: https://muan.co/posts/javascript
So GitHub is usable, but there are a number of UI layout issues, and searching within a file is sometimes a mess (e.g., highlighting the wrong text, rendering text incorrectly, etc.; maybe that's true for all browsers; you're better off viewing a file as text in raw mode)
Perhaps it depends what software one is using
For example, commandline search and tarball/zipball retrieval from the website, e.g., github.com, raw.githubusercontent.com and codeload.github.com, are not slow for me, certainly not any slower than Gitlab
I do not use a browser nor do I use the git software
For instance, the GP could be a proponent of self-management, and the statement would be coherent (an indictment of leaders within capitalism) without supposing anything about communism.
I work in a large C++ codebase and a rename like that will actually just crash my vscode instance straight-up.
(There are good automated tools that make it straightforward to script up a repository-wide mutation like this however. But they still generate PRs that require human review; in the case of the one I used, it'd break the PR up into tranches of 50-ish files per tranche and then hunt down individuals with authority to review the root directory of the tranche and assign it to them. Quite useful!)
Depending on where you live (or what websites you visit) it's not unreasonable.
You really can't escape the enshittification.
Are you under the impression that the placeholder skeletons are there and slow because of React? How would a UI written in C++ get the data quicker from the back end to replace the skeleton with?
https://we.phorge.it/phame/post/view/8/anonymous_cloning_dis... https://we.phorge.it/phame/post/view/9/anonymous_cloning_has...
But CSS has bit me with heavy pages (causing a few seconds of lag that even devtools debugging/logging didn't point towards). We know wildcard selectors can impact performance, but in my case there were many open ended selectors like `:not(.what) .ever` where the `:not()` not being attached to anything made it act like a wildcard with conditions. Using `:has()` will do the same with additional overhead. Safari was the worst at handling large pages and these types of selectors and I noticed more sluggishness 2-3 years ago.
Github's code view page has been unreasonably slow for the last several years ever since they migrated away from Rails for no apparent reason.
In case you're one of today's lucky 10,000, OpenCore Legacy Patcher supports Macs going to back as far as 2007: https://github.com/dortania/OpenCore-Legacy-Patcher
Normally, you should be able to debug selector matching performance (and in general, see how much style computation costs you), so it's a bit weird if you have phantom multi-second delays.
Old GitHub was very light on features, whereas the new UIs are way more curated on the surface.
Unfortunately all of this brings in tons of complexity. It doesn't help that there are a lot of junior developers working on it, clearly.
React can have all the niceties and optimization in the world, but that fails when its users insist on using it incorrectly, building huge tangled messy components and then wondering why a click takes 1.3 seconds to deliver feedback.
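For what it's worth, the usual shape of that 1.3-second click is expensive work being redone on every render of a big, unmemoized tree. A minimal sketch of the difference (component and prop names are invented):

    import { memo, useMemo, useState } from "react";

    // A big, expensive-to-render list. memo() lets React skip it when its props
    // are unchanged between renders.
    const Results = memo(function Results({ items }: { items: string[] }) {
      return <ul>{items.map(i => <li key={i}>{i}</li>)}</ul>;
    });

    export function Search({ items }: { items: string[] }) {
      const [query, setQuery] = useState("");
      const [theme, setTheme] = useState("light");

      // Without useMemo, this filter produces a fresh array on every render,
      // so the memo() above never helps and the whole list re-renders even
      // when only the unrelated theme state changed.
      const filtered = useMemo(
        () => items.filter(i => i.includes(query)),
        [items, query]
      );

      return (
        <div className={theme}>
          <input value={query} onChange={e => setQuery(e.target.value)} />
          <button onClick={() => setTheme(t => (t === "light" ? "dark" : "light"))}>
            toggle theme
          </button>
          <Results items={filtered} />
        </div>
      );
    }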
That’s one of my favorites. The exact bug they described during React launch presentation, that React was supposed to help fix with the unidirectional dataflow. You know the one where unread message badges were showing up inconsistently in the UI in different places. They never managed to fix that bug in the 10 years since React was announced and I eventually left Facebook for good.
I use the Github website as I would any software mirror/repository
I'm not interested in images (mascots or other garbage) or executing code (gratuitous Javascript) when using the Github website, I'm interested in reading and downloading source code
Svelte is ok. It could have been great but the api for their version of observables is a disaster (which I hope they eventually fix). Sveltekit is half baked and convoluted and I strongly advise not touching it.
VDOM is also a good idea that simplifies the mental model tremendously. Of course these days we can do better than a VDOM. Svelte in fact doesn't use a VDOM. You can say that VDOM is a terrible idea in comparison with Svelte, but that's just anachronistic.
This is such a tired trope.
But I guess the problem is that every single development position has been converging into this.
The only times in my career as a developer where I was 100% happy was when there was no career PM. Sales, customers, end-users, an engineering manager, another manager, a business owner, a random employee, some rando right out the street... All of those were way better product owners than career PMs in my 25 years of experience.
This is not exactly about the competence of the category, it's just about what fits and what doesn't. Software development ONLY works when there is a balance of power. PMs have leverage that developers rarely have.
I come from Electrical Engineering. Engineering requires responsibility, but responsibility requires the ability to say "no". PMs, when part of a multi-disciplinary team, make this borderline impossible, and make "being an engineer" synonymous with putting a target on your back.
At any rate your point doesn't make any sense. The same point indicts all leaders, it has nothing to do with capitalism. It's like saying something indicts a specific race of people when it applies to all people equally.
IMO it's the MAIN thing to understand about React—how it renders.
Regardless, now I'm the one with egg on my face since the new compiler promises to eventually remove the need for manual memoization almost entirely. The "almost" still fills me with fear
GitLab: https://docs.gitlab.com/administration/gitaly/praefect/
GitHub: https://github.blog/engineering/infrastructure/stretching-sp...
Slack puts a nicer shade of lipstick on the pig than Teams does, but the lips still belong to the same thing.
Which, unfortunately, cannot be measured :( so no KPIs. Darn!
It's all fun and games until you cut quality over and over so much that your customers just leave. Ask Chrysler or GE. I mean, they must have saved, what, billions across decades? And for free!
Well... um... not free actually, because those companies have been run into the ground, dragged through hell, revived, and then damned again.
GitHub is big software, but not that big. Huge monorepos and big big diffs grind GitHub to a pulp.
The reality is both can be slow, it depends on your data access patterns, network usage, and architecture.
But the other reality is that SPAs and REST APIs usually just have less optimal network usage and much worse data access patterns than traditional DB-connected SSR monoliths. Same goes for microservices.
Like, you could design a highly scalable and optimal SPA. Who's doing it? Almost nobody.
No, instead they're making basically one endpoint per DB table, recreating SQL queries in client-side memory, duplicating complex business logic on the front and back end, and sending 50 requests to load a dashboard.
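To make the contrast concrete, it's roughly the difference between the client reassembling the dashboard itself and the server doing the join where the data lives (the endpoints and shapes here are invented for illustration):

    // Anti-pattern: one fetch per "table", joined in client memory.
    async function loadDashboardChatty(userId: string) {
      const user = await fetch(`/api/users/${userId}`).then(r => r.json());
      const orders = await fetch(`/api/orders?userId=${userId}`).then(r => r.json());
      const invoices = await Promise.all(
        orders.map((o: { id: string }) =>
          fetch(`/api/invoices?orderId=${o.id}`).then(r => r.json())
        )
      );
      // N+2 round trips, and the "join" logic now lives in the browser too.
      return { user, orders, invoices };
    }

    // Closer to the SSR-monolith shape: one endpoint that aggregates server-side.
    async function loadDashboard(userId: string) {
      return fetch(`/api/dashboard/${userId}`).then(r => r.json()); // 1 round trip
    }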
Of course some languages... PHP... aren't so lucky. $customer->cusomerEmail? Good luck dealing with that critical in production, fuckheads!
Back in the day (I was a junior dev) this was easier than grappling with React hooks today:
    BOOL CMainDialog::OnInitDialog()
    {
        CDialogEx::OnInitDialog();

        // Create the tabbed property sheet as a child of the main dialog.
        m_pPropertySheet = new CMyPropertySheet(_T("My Tabbed Dialog"), this);
        m_pPropertySheet->Create(this, WS_CHILD | WS_VISIBLE, WS_EX_CONTROLPARENT);

        // Size it to fit inside the dialog's client area with a small inset.
        CRect rectMainDialog;
        GetClientRect(&rectMainDialog);
        CRect rectPropertySheet(10, 10, rectMainDialog.Width() - 20, rectMainDialog.Height() - 20);
        m_pPropertySheet->MoveWindow(rectPropertySheet);

        return TRUE;
    }
Good ol’ SSR - but eventually users and PMs start requesting features that can only be implemented with an SPA system, and I (begrudgingly) accept their arguments.
In my role (of many) as technical architect for my org, and as an act of resistance (and possibly to intentionally sabotage LLMs taking over), I opted for hybrid SSR + Svelte - it’s working well for us.
I had to alter basically every aspect of how I interact with it because of how fucking slow it is! I still can't shake the sense that it's about to go down or that I've done something wrong every time I click something and nothing happens for several seconds.
It's these professional PMs who have done nothing other than project management or PMP, and who don't understand the long-term dev cost of features, that cause these systemic issues.
Now you CAN do it so that is not the case, but tbh I have never seen that in the wild.
There are of course performant react apps out there. What Steve did with tldraw is amazing.
However, the vast majority of the apps out there are garbage since the framework itself is terribly inefficient.
I absolutely should. I hate how many applications have a UI that won't let me copy-paste an error message to search for, much less a menu item; who could possibly have thought that was a good idea?
Regardless of how, the fact remains that the previous implementation of their UI did fetch and render the data from the backend significantly faster than the current React-based one does.
I'm still a big believer in "separation of powers" a la Scrum.
There should be a "Product Owner" that can be anyone really, and on the other side there is a self-managed development team that doesn't include this participant. This gives the team leverage to do things their way and act as a real engineering team.
The reason scrum was killed is because of PMs trying to get themselves into those teams and hijacking the process. Developers hated "PM-based scrum", which is not really scrum at all.
Edit: here's a good investigation on a real-enough app https://www.developerway.com/posts/tailwind-vs-linaria-perfo...
In this very thread there's some asshole using the word "memoization" when "caching" would have been fine.
Even other frameworks like Vue.js, Solid or Svelte don't really suffer from it as much. It simply happens a couple of orders of magnitude more often in React than in any other framework.
> Writing on the internet can be a two-way thing, a learning experience guided by iteration and feedback. I’ve learned some bad habits from Hacker News. I added Caveats sections to articles to make sure that nobody would take my points too broadly. I edited away asides and comments that were fun but would make articles less focused. I came to expect pedantic, judgmental feedback on everything I wrote, regardless of what it was.
https://macwright.com/2022/09/15/hacker-news
Which is true. Pedantism is the lowest form of pseudo-intelligence.
My IRC client is taking 60 MiB of memory and 0.01% CPU. My IRC client is responsive and faster, and it has more configurable notification settings. I like the IRC client more.
> Bandcamp
I just went to the bandcamp page and it indeed loaded very quickly. As far as I can tell, there's no react in use anywhere so I guess that's why.
What do you mean by bandcamp using react?
That test will be disabled for being flaky in under a week because the CI runners have contention with other jobs, causing them to randomly be slower and flake, and the frontend team does not want to waste time investigating flakes.
"Just have dedicated runners with guaranteed CPU performance", but that's the CI platform team's issue, the frontend and testing teams can't fix it, and the CI infra team won't prioritize it for a minimum of 5 years.
My memory is fuzzy but I think it was on phab that I discovered and loved to use stacked merges. This is where you have a merge request into another open merge request etc. Super useful. Miss that in the git world.
IMO "Knowing enough to do damage" is the worst possible situation.
A regular user who's a domain expert is a 100x better PO.
Firefox doesn't work on Windows 7 anymore but installing Firefox is still a hell of a lot better than sticking to IE.
It's possible I'm wrong about bandcamp using react but your guess is far from reality as well – react itself does not prevent or discourage loading pages very quickly.
Well, that's a valid framework too, but by the practical standard of goodness, the best of the trash actually is good, because you don't judge goodness against some abstract ideal but against the available choices. Both are valid frameworks, but only one is useful in practice.
We had/have a similar problem where things began with "a sprinkle of js here/there" and then over time those islands became much bigger and encompassed more and more functionality. Entire backend templates were ported to the JS framework, and then the page would load and stuff would pop in after the DOMReady event fired and the JS booted.
I've been working backwards to remove many of these changes and handle them server side if possible or at least give a better UX while the frontend is getting ready. It's not easy!
In a perfect world, we could run the output of the PHP backend through a JS SSR endpoint and hydrate the few necessary components into full HTML, but unfortunately, many of today's JS SSR tools are only available if you use the meta framework as well.
What's going to be fun over the next year is finally deciding if we should go "all-in" on a JS frontend (using Inertia.js for the communication with the backend) or go back to PHP entirely and try to leverage more browser capabilities. There's not really a right/wrong answer, but if marketing wants to keep adding flashy features, having the flexibility of JS would be handy.
Pushes and pulls would still kinda work, actions not so much (but that's cause it needed to transfer more than 100MB)
It’s best in class because everything else is worse. It’s a sad state of affairs.
You can’t just lay this bear trap of an opportunity and expect me to not pedantically state that the word is either “pedantry”, the activity performed by pedants, or “pedantic”, to describe such activities.
“Pedantism” would be a philosophy or viewpoint that extols pedantry. Pedantism would be to pedantry as deontology is to rule-following, a justification of an activity. As such, pedantism would be a slightly higher form of pseudo-intelligence than mere pedantry.
But only slightly.
Meanwhile, I opened a 100K line CSV in Neovim and while it took a couple of seconds to open and render highlighting, after that, it was fine.
A fun part of a retro at my company last year was me explaining to a team, “had all of your pods’ requests succeeded, the DB would have been pushing out well over 200 Gbps, which is generally reserved for top-of-rack switches.” Of course, someone else then had to translate that into “4K Blu-Rays per second,” because web devs aren’t typically familiar with networking, racks, data centers…
It is. Unemployment was virtually non-existent in the USSR, and healthcare was not connected to employment status. So a worker there knew that saying no to their boss was not a life-or-death decision. They might of course be less wealthy and so on, but the worst case didn't look as bad.
If your town didn't meet the farming quota they would starve your entire town.
If you went on strike you would get murdered and sometimes your family would get murdered.
If you deserted from the army or retreated you would get shot by barrier troops.
If you were injured or sick you would be disposed of or hidden on an island.
If you were a female orphan under the age of 15 there was something like an 88% chance you'd be used as a prostitute.
The USSR was terrible for workers. Some of this was hidden by lying about statistics, same as it is today with authoritarian countries.
I’m not a frontend dev, and have next to zero experience with anything beyond jQuery, but an analogy is shell. Bash (and zsh, though I find some of its syntactic sugar nicer, albeit still inscrutable) will happily let you do extremely stupid things, but it also lets you do extremely complicated things in a very concise manner. That doesn’t mean it’s inherently bad, it means you need to know what the hell you’re doing, and use linters, write tests, etc.
> if they were forced to use slow machines, they would not be able to put out crap like that
Lots of developers are rather obsessed with writing good, performant code. The problem is rather that many project managers do not let them do these insane optimizations because they take time.
The only thing that forcing developers to use slow machines will bring is developers quitting (and quite a lot of them would actually love to see the person responsible for this decision dead (I'm not joking) because he made the developers' job a hell on earth).
What you should rather do if you want performant software is to fire all the project managers who don't give the developers the necessary time (or don't encourage the developers) to write highly optimized code (i.e. those idiot project managers who argue with "pragmatism" concerning this point).
Maybe it will make a significant enough cumulative impact 5 years later that it can actually be noticed and defended in a meeting against other priorities.
But I’ve never heard of anyone hiring someone on minimum wage and deferring a huge bonus to 5 years later.
Even if it does make a big impact, would anyone even take such a job?
No they don't. It's literally just a skill issue.
To give just one simple example: to get the textbook complexity bound for Dijkstra's algorithm, you need some fancy mergeable heap data structures which are much more complicated, and thus more time-intensive to implement, than the naive implementation.
Or you can get insane low-level optimizations by using the SIMD instructions that modern processors provide. Unluckily, this takes a lot of time and leads to code that is not easy to understand (and thus not easy to write) for "classically trained" programmers.
Yes, you indeed need a lot of skill to write such very fast algorithms, but even for such ultra-smart programmers, finding and applying such optimizations takes a lot of development time, which is why this is often only done for code paths that are insanely computation-heavy and performance-critical, such as video (and sometimes audio) codecs.
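For a sense of what the "naive but fine" end of that trade-off looks like: Dijkstra with a plain binary heap and lazy deletion is O((V + E) log V), which is usually what actually gets shipped; the textbook O(E + V log V) bound is the one that needs the fancier mergeable heaps with decrease-key. A sketch (the adjacency-list representation is assumed):

    // Dijkstra with a plain binary heap and lazy deletion: O((V + E) log V).
    // The textbook O(E + V log V) bound needs a Fibonacci/pairing heap with
    // decrease-key, which is the "much more complicated" part mentioned above.
    type Edge = { to: number; weight: number };

    function dijkstra(graph: Edge[][], source: number): number[] {
      const dist = new Array(graph.length).fill(Infinity);
      dist[source] = 0;

      // Minimal binary min-heap of [distance, node] pairs.
      const heap: [number, number][] = [[0, source]];
      const push = (x: [number, number]) => {
        heap.push(x);
        let i = heap.length - 1;
        while (i > 0 && heap[(i - 1) >> 1][0] > heap[i][0]) {
          [heap[(i - 1) >> 1], heap[i]] = [heap[i], heap[(i - 1) >> 1]];
          i = (i - 1) >> 1;
        }
      };
      const pop = (): [number, number] => {
        const top = heap[0];
        const last = heap.pop()!;
        if (heap.length > 0) {
          heap[0] = last;
          let i = 0;
          for (;;) {
            let m = i;
            for (const c of [2 * i + 1, 2 * i + 2])
              if (c < heap.length && heap[c][0] < heap[m][0]) m = c;
            if (m === i) break;
            [heap[m], heap[i]] = [heap[i], heap[m]];
            i = m;
          }
        }
        return top;
      };

      while (heap.length > 0) {
        const [d, u] = pop();
        if (d > dist[u]) continue; // stale entry: lazy deletion instead of decrease-key
        for (const { to, weight } of graph[u]) {
          if (d + weight < dist[to]) {
            dist[to] = d + weight;
            push([dist[to], to]);
          }
        }
      }
      return dist;
    }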
If github has a million users visiting it per day on a FRESH cache, and all of them have to download at least 10 megabytes of text data (both of these numbers are far too high), you are at ... 0.015 "4k blurays per second". Yeah I think MS's datacenters will survive.
Sourcehut is basically a really barebones web interface for git server, so I don't think it's really comparable to GitHub
For hosting your own projects that's sometimes not a viable solution either. Limiting your open source project to a platform other than GitHub hurts its discoverability, because GitHub is usually what most devs and non-devs associate with open source. I heard a lot of "It's not open source if it's not on GitHub". You can mirror your project to GH, of course.
"Just migrate to X because it's faster" doesn't work that well in the real world
https://alexsidorenko.com/blog/react-render-always-rerenders
It's a set of 6 short and sweet posts that breaks down rendering behavior, memoization, and relevant hooks
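The behavior those posts start from, in miniature: a state change re-renders the component and everything below it whether or not the children's props changed, and React.memo is the opt-out (names invented):

    import { memo, useState } from "react";

    function Child({ label }: { label: string }) {
      console.log(`rendered: ${label}`); // fires on every parent re-render
      return <span>{label}</span>;
    }

    // Same component, but skipped when its props are shallow-equal to last time.
    const MemoChild = memo(Child);

    export function Parent() {
      const [count, setCount] = useState(0);
      return (
        <div>
          <button onClick={() => setCount(c => c + 1)}>clicked {count}</button>
          <Child label="re-renders on every click" />
          <MemoChild label="renders once" />
        </div>
      );
    }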
Any time I click a GitHub link, if I navigate beyond the readme, then my history is completely borked. Going “back” one page might go to the readme, might go back to HN, or might even go back to the readme and then back to the page I was trying to leave!
It’s infuriating and I always figured it was a bug they’d fix eventually but it’s been at least two years of this crap.
The point is moreso that PHP won't stop you from doing that. It will run, and it will continue running, and then it will throw an error at some point. Maybe.
If the code is actually executed. If it's in a branch that's only executed like 1/1000 times... not good.
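For contrast, in a statically checked language the same slip dies at compile time instead of lurking in the 1/1000 branch. A tiny TypeScript sketch (the Customer type is invented):

    // Hypothetical model standing in for the renamed field.
    interface Customer {
      customerEmail: string;
    }

    function notify(customer: Customer): string {
      // return customer.cusomerEmail; // compile error: Property 'cusomerEmail'
      //                               // does not exist on type 'Customer'.
      return customer.customerEmail;   // the typo never reaches production
    }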
The servers at https://codeload.github.com and https://raw.githubusercontent.com are two examples
Each redirects to https://github.com
> Which, unfortunately, cannot be measured
This is such a subtle but important thing that so many people do not understand about data analysis. It's even at the heart of things like survivor bias[0]. Your measure is always a proxy and this proxy has varying degrees of alignment with what you want to measure. I know everyone knows the cliche "The devil is in the details" but everyone seems to continually make these mistakes because nuance is hard. But then again what is a cliche if not words of wisdom that everyone can recite but fail to follow?
> Its all fun and games until you cut quality over and over so much your customers just leave.
The alternative is you develop a Lemon Market, which is a terrible situation for all parties involved. Short-term profits might be up, but these come at the loss of much higher long-term rewards.

[0] You infer where the downed planes were shot from the measurements you can make on recovered planes. But that is very different from measuring where downed planes were shot. You can't just take the inverse of the returned planes and know where to add plating from there.