mike_hearn (No.42726661)
tl;dr the same reason other services go offline at night: concurrency is hard and many computations aren't thread safe, so they need to run serially against stable snapshots of the data. If you don't have a database that can provide such snapshots efficiently, you have no choice but to stop the flow of inbound transactions entirely.
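To make that concrete, here is a minimal sketch in Python of the pattern being described (class and method names are mine, purely illustrative): stop accepting writes, take a stable snapshot, run the non-thread-safe computation serially against it, then reopen.

    import threading

    class LedgerService:
        # Illustrative sketch: go offline for writes, snapshot the data,
        # then run the non-thread-safe batch computation serially.

        def __init__(self):
            self._lock = threading.Lock()
            self._accepting_writes = True
            self._records = []

        def submit(self, record):
            # Inbound transactions are rejected while the batch is running.
            with self._lock:
                if not self._accepting_writes:
                    raise RuntimeError("service is offline for the nightly batch")
                self._records.append(record)

        def run_nightly_batch(self, process):
            # Stop inbound writes so the snapshot stays stable.
            with self._lock:
                self._accepting_writes = False
                snapshot = list(self._records)  # stable copy of the data
            try:
                # The batch logic assumes nothing changes underneath it,
                # so it runs serially against the snapshot.
                for record in snapshot:
                    process(record)
            finally:
                with self._lock:
                    self._accepting_writes = True

A database offering snapshot isolation (MVCC) lets the batch read a consistent snapshot while writes keep flowing, which is the alternative alluded to above.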

Sounds like Dafydd did the right thing in pushing them to deliver some value now rather than trying to rebuild everything right away. A common mistake I've seen people make is assuming that overnight batch jobs that have to shut down the service are a side effect of using mainframes, and that any new system built on newer tech won't have that problem.

In reality, getting rid of those kinds of batch jobs is often a hard engineering project that requires redesigning the algorithms or changing the business processes. A classic example is banking, where the ordering of the jobs can change real-world outcomes (e.g. is interest credited first and then cheques processed, or vice versa?).
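A toy Python illustration of that ordering effect (all numbers made up):

    def end_of_day(balance, cheques, interest_rate, interest_first):
        # Toy example: the order of two nightly jobs changes the result.
        # Crediting interest before clearing cheques pays interest on the
        # larger, pre-cheque balance; the other order pays it on less.
        if interest_first:
            balance += balance * interest_rate   # job 1: credit interest
            balance -= sum(cheques)              # job 2: clear cheques
        else:
            balance -= sum(cheques)              # job 1: clear cheques
            balance += balance * interest_rate   # job 2: credit interest
        return balance

    print(end_of_day(10_000, [2_000], 0.01, interest_first=True))   # 8100.0
    print(end_of_day(10_000, [2_000], 0.01, interest_first=False))  # 8080.0

Swapping the two jobs moves real money, so the schedule isn't just an implementation detail that can be freely rearranged.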

In other cases it's often easier for users to understand a system that shuts down overnight. If the rule is "things submitted by 9pm will be processed by the next day", it's easy to explain. If the rule is "you can submit at any time and it might be processed by the next day, depending on whether it happens to land before the snapshot taken at the start of that night's batch job", that can be more frustrating than helpful.

Sometimes the jobs are batch purely because of mainframe limitations and for no other reason; those can be made incremental more easily if you can get off the mainframe platform in the first place. But that requires rewriting huge amounts of code, hence the popularity of emulators and code transpilers.

replies(3): >>42726889 >>42726950 >>42735550
abigail95 (No.42726889)
Do you know why the downtime window hasn't shrunk as the system has been deployed onto faster hardware over the years?

Nobody would care or notice if this thing had 99.5% availability and went read-only for a few minutes per day.

replies(4): >>42727036 >>42727102 >>42733233 >>42736529
roryirvine (No.42736529)
Most likely because it's not just a single batch job, but a whole series of them, each scheduled based on a rough estimate of how long the jobs around it will take.

For example, imagine it's 1997 and you're creating a job which produces a summary report of the total number of cars registered, grouped by manufacturer and model.

Licensed car dealers can submit updates to the list of available models by uploading an EDIFACT file using FTP or AS1. Those uploads are processed nightly by a job which runs at 0247. You check the logs for the past year, and find that this usually takes less than 5 minutes to run, but has on two occasions taken closer to 20 minutes.

Since you want the updated list of models available before your summary job runs, you schedule it for 0312, leaving a 25-minute gap just in case. You document your reasoning as a comment in the production control file used to schedule this sequence of jobs.
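Expressed as a hypothetical extract from that control file, sketched here as Python data purely for illustration:

    # Hypothetical nightly schedule extract. The key point: the gap is a
    # hard-coded clock time justified by a comment, not an explicit dependency.
    NIGHTLY_SCHEDULE = [
        # (start time, job)
        ("02:47", "ingest_model_updates"),          # EDIFACT uploads from dealers
        # Usually finishes in under 5 minutes, but has twice taken ~20, so
        # leave a 25-minute gap before anything that needs its output.
        ("03:12", "registrations_summary_report"),
    ]

Nothing in the schedule itself knows that the report depends on the ingest job having finished; the relationship lives only in the clock times and the comment.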

Ten years later, and manufacturers can now upload using SFTP or AS2, and you start thinking about ditching EDIFACT altogether and providing a SOAP interface instead. In another ten years you switch off the FTP facility, but still accept EDIFACT uploads via AS2 as a courtesy to the one dealership that still does that.

Another eight years have passed. The job which ingests the updated model data is now a no-op and reliably runs in less than a millisecond every night. But your summary report is still scheduled for 0312.

And there might well be tens of thousands of jobs, each with hundreds of dependencies. Altering that schedule is going to be a major piece of work in itself.