It's not very complex, and it feels like we're running a lot of compute just to sync data between systems. Admittedly there isn't good separation of concerns, so there's overlap between systems that requires the syncs in the first place.
I've been looking at things like Kafka, thinking there might be some magic there that would let us use less compute or make the data syncs a little easier to deal with, but I wonder at what scale of data throughput a service like that is really needed. If it turns out to be just a different service with the same timeliness of data sync and similar compute requirements, I struggle to see what benefits it would provide.
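To make the comparison concrete, here's a rough sketch of the two shapes I'm weighing (Python; the helper functions, topic name, and broker address are all made up, and the Kafka side assumes the kafka-python client):

    import json
    import time

    from kafka import KafkaConsumer  # pip install kafka-python

    # Hypothetical stand-in for whatever write the target system needs.
    def upsert_into_target(record):
        print("syncing", record)

    # Shape 1: roughly what we do today -- poll on a schedule, copy changes.
    def batch_sync(fetch_changes):
        while True:
            for record in fetch_changes():   # hypothetical change-capture query
                upsert_into_target(record)
            time.sleep(300)                  # timeliness is capped by the poll interval

    # Shape 2: the Kafka version -- same write path, but event-driven.
    def stream_sync():
        consumer = KafkaConsumer(
            "entity-changes",                # made-up topic name
            bootstrap_servers="localhost:9092",
            value_deserializer=lambda v: json.loads(v),
        )
        for message in consumer:             # blocks; fires as events arrive
            upsert_into_target(message.value)

Both versions end up doing the same upsert; the Kafka one mostly changes when it happens, which is why I keep circling back to the throughput and timeliness question.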
I'd love something like a levels.fyi-style site where people could anonymously report things like this: the tech stack being used, data throughput, amount of compute in play, and ratings/comments on their overall solution ("would do again", "don't recommend", "overkill", "resume filler"). It feels a lot like other areas of technology where a use case comes out of a huge company, RDD (resume-driven development) takes hold, and now there are people out there doing the equivalent of souping up a 1997 Honda Accord like it's a racecar when it only drives grandma to her appointments.