I still remember the behemoth of a commit that was "-60,000 (or similar) lines of code". Best commit I ever pushed.
Those were fun times. Hadn't done anything algorithmically impressive since.
Is this, from elsewhere in the thread, a system rethink, https://github.com/dotnet/runtime/pull/36715/files ?
I've worked on a product that reinvented parts of the standard library in confusing and unexpected ways, which meant a lot of the code could easily be compacted 10-50x, i.e. 20-50 lines turned into 1-5 or so, in many places. I argued for doing this and deleting a lot of the code base, but it didn't take hold before me and every other dev left, except one. Nine months after that they had deleted half the code base out of necessity, roughly 2 MLOC down to 1 MLOC, because most of it wasn't actually used much by the customers and the lone remaining developer just couldn't manage the mess on his own.
I wouldn't call that a system rethink.
Otherwise just downvote or flag, I guess, but this comment of yours reads as an insult to a person who maybe didn't put the most effort into writing their comment, but seems genuine to me at least.
the select-a-bunch-of-code-and-then-zap-it-with-the-Del-key is the best hardware algorithm.
uncatchable, so I won't even try.
Jokes aside, could I get a layman's explanation of the graph theory stuff here? Sounds pretty cool but the terminology escapes me
One way to often arrive at it is to just draw some graphs, on paper/whiteboard, and manually step through examples, pointing with your finger/pen, drawing changes, and sometimes drawing a table. You'll get a better idea of what has to happen, and what the opportunities are.
This sounds "Then draw the rest of the owl," but it can work, once you get immersed.
Then code it up. And when you spot a clever opportunity, and find the right language to document your solution, it can sound like a brilliant insight you just pulled out of the air because you're so knowledgeable and smart in general. In reality you had to work through that specific problem, to the point you understood it, like Feynman would want you to.
I think Feynman would tell us to work through problems. And that Feynman would really f-ing hate Leetcode performance art interviews (like he was dismayed when he found students who'd rote-memorize the things to say). Don't let Leetcode asshattery make you think you're "not good at" algorithms.
:)
It's because every task was just doing a database call, but they had a whole repo and AWS Lambdas for running it. Stupidest thing I've ever seen.
But a lot of it is opportunity. Like, I had the opportunity to work on an old PHP backend with 500 ms to 1 second response times (thanks in part to it writing everything to a giant XML string, which was then parsed and converted to a JSON blob before being sent back over the line). Simply rewriting it in naive / best-practices Go brought response times down to 10 ms. In hindsight the project was far too big to rewrite on my own, and I should have spent six months to a year optimizing and refactoring it instead, but, hindsight.
You are starting at a specific node in the graph and saying that, if there's an isomorphism, the target tree's root node must be equivalent to that specific starting node in the original graph.
You just walk through the original graph following the pattern of the target tree, and if something doesn't match it's false, otherwise true? Am I mistaken here? Again, the target being a tree is a bit irrelevant. This will work for any subgraph, as long as you are also given starting-point nodes for both the target and the original graph?
The graph that is to be checked as a subgraph is a tree. From there he says it can be done with an algorithm that traverses every node at most once.
I'm assuming he's also given a starting node in the original graph, and the algorithm just traverses both graphs at the same time, starting from the given start node in the original graph and the root of the tree, to see if they match? Standard DFS or BFS works here.
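Something like this, to make it concrete (a minimal Python sketch under my own assumptions: labeled nodes, a dict-based graph, and no two same-labeled neighbors, so the walk stays single-pass):

    # Hypothetical sketch of "walk both at once": descend the pattern tree
    # and the graph in lockstep from the given start node, failing as soon
    # as something doesn't line up.
    def matches(graph, labels, start, tree):
        """graph: node -> set of neighbors; labels: node -> label;
        tree: (label, [subtrees]); start: graph node paired with the root."""
        root_label, subtrees = tree
        if labels[start] != root_label:
            return False
        by_label = {labels[n]: n for n in graph[start]}
        # Every subtree must be mirrored by a same-labeled neighbor.
        return all(
            sub[0] in by_label and matches(graph, labels, by_label[sub[0]], sub)
            for sub in subtrees
        )

    graph = {"g1": {"g2", "g3"}, "g2": set(), "g3": {"g4"}, "g4": set()}
    labels = {"g1": "a", "g2": "b", "g3": "c", "g4": "d"}
    print(matches(graph, labels, "g1", ("a", [("b", []), ("c", [])])))  # True

Each graph node is inspected at most once, which is where the "every node at most one time" claim comes from; if a node could have several same-labeled neighbors you'd need backtracking, and the bound would no longer hold as simply.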
I may be mistaken, because I don't see any other way to do it in one walkthrough unless you are given a starting node in the original graph.
To your other point, the algorithm inherently has to be stateful. All traversal algorithms for graphs need long-term state, simply because if you're at a node with, say, 40 paths to other places, you can only go down one path at a time, so you have to statefully remember that the node has another 39 paths to come back to later.
Given two graphs, one of which is a tree, you cannot determine whether the tree is a subgraph of the other graph in one walkthrough?
It's only possible if you're given additional information, like a starting node to search from? I'm genuinely confused.
There's also a role called algorithms engineer at standard tech companies (typically for lower-level work like networking, embedded systems, or graphics), but the lack of an engineering background may hamstring you there. Engineers working in crypto also use a fair bit of algorithms knowledge.
I do low level work at a top company, and you only use algorithms knowledge on the job a couple of times a year at best.
http://www.nsl.com/papers/samefringe.htm
If you flatten both of your trees/graphs and regard the output as strings of nodes, you reduce your task to a substring search.
Now, if you want to verify that the structures, and not just the leaf nodes, are identical, you might be able to encode structure information into your strings.
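For example, something like this toy sketch (the parenthesized encoding is my own assumption, just one way to keep the structure and not only the leaves):

    # Serialize trees to strings so subtree search becomes substring search.
    def serialize(tree):
        label, children = tree
        return f"({label}" + "".join(serialize(c) for c in children) + ")"

    big = ("a", [("b", [("d", []), ("e", [])]), ("c", [])])
    small = ("b", [("d", []), ("e", [])])
    haystack = serialize(big)   # (a(b(d)(e))(c))
    needle = serialize(small)   # (b(d)(e))
    print(needle in haystack)   # True: the subtree occurs in the big tree

One caveat: this only finds complete rooted subtrees; a pattern that omits some of a node's children won't match this way.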
I was thinking in terms of finding all subgraph isomorphisms. But this definitely is O(N) if all you need is one solution.
But then I thought about it even further, and this reduces to a sliding-window problem. In that case you still need to travel to each node in the window to see if there's a match.
So it cannot be that you traverse each node once. Not if you want to find all possible subgraph isomorphisms.
Imagine a string that is a fractal of substrings:
rrrrrrrrrrrrrrrrrrrrrrrrrrrr
And the other one: rrrrrrr
Right? The sliding window for rrrrrrr will be 7 in length, and you need to traverse that entire window every time you move it. So by that fact alone, every node is traversed at least 7 times.

Yet you ask someone "how do you build an efficient LFU" and get blank stares (I just LOVE the memcache solution of regions and probabilistic promotion/demotion).
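Back to the window example, here's that degenerate case in code, counting comparisons for the naive scan (worth noting that linear-time matchers like KMP avoid the rescanning by remembering prefix state, so this bound applies to the naive version):

    # Count character comparisons for the naive sliding-window search
    # on the all-'r' example above.
    def naive_count(haystack, needle):
        comparisons, hits = 0, 0
        for i in range(len(haystack) - len(needle) + 1):
            for j in range(len(needle)):      # rescan the whole window
                comparisons += 1
                if haystack[i + j] != needle[j]:
                    break
            else:
                hits += 1
        return hits, comparisons

    print(naive_count("r" * 28, "r" * 7))  # (22, 154): ~7 comparisons per position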
I oversimplified the problem :). Really it was about generating an isomorphic-ish view, based on some user-defined rules, of an existing graph, itself generated by a subgraph isomorphism via a query language.
Think of a computer network as a graph, with various other configuration items like processes, attached drives, etc. (something also known as a CMDB). Query that graph to generate a subgraph out of it. Then use rules to make that subgraph appear as a tree of layers (a tree, but within each layer you may have additional edges between the vertices), because trees are an efficient, non-complex representation in 2D space (i.e. monitors).
However, a child node in that tree isn't necessarily connected directly to its parent node. E.g. one of the rules may be "display the subnetwork and the attached drives in a single layer", so now the parent node, the gateway, has both network nodes (directly connected to it) and attached drives (indirectly connected to it) as direct descendants.
Extend this to be able to connect through any descendant, direct or indirect (gateway -> network node -> disk -> config file -> config value - but put the config value on the level of the network node and build a link between them to represent the compound relationship).
Walk through the original subgraph while evaluating the rules, and build a "trace back" stack that lets you understand how to build each layer, even in the presence of compound links, while performing a single walkthrough instead of n×m passes (original vertices × rules for generation).
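Purely to make that concrete, a stripped-down sketch of the single-walk idea (every name, the rule format, and the assumption that the queried subgraph is walked as a tree with no revisits are my own inventions, not the actual system):

    # DFS the subgraph once, keep a stack of ancestors (the "trace back"),
    # and let per-kind rules decide how many levels up each node is shown.
    # Hoisted nodes get an extra "compound" link recording the relationship.
    def build_layers(graph, kinds, root, hoist):
        """hoist: kind -> how many ancestors to skip when placing the node."""
        children = {}    # display tree: parent -> [child, ...]
        compound = []    # (shown_parent, node) links for hoisted nodes
        stack = []       # ancestors of the node currently being visited

        def visit(node):
            if stack:
                up = hoist.get(kinds[node], 0)
                parent = stack[max(0, len(stack) - 1 - up)]
                children.setdefault(parent, []).append(node)
                if up:
                    compound.append((parent, node))
            stack.append(node)
            for nxt in graph.get(node, []):
                visit(nxt)
            stack.pop()

        visit(root)
        return children, compound

    graph = {"gateway": ["host1"], "host1": ["disk1"], "disk1": []}
    kinds = {"gateway": "net", "host1": "net", "disk1": "drive"}
    # Rule: drives are displayed one layer up, next to their host.
    print(build_layers(graph, kinds, "gateway", {"drive": 1}))
    # ({'gateway': ['host1', 'disk1']}, [('gateway', 'disk1')])

The point is the same as described above: the ancestor stack is consulted at placement time, so each vertex is visited once regardless of how many rules exist.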
As I said, that was a lot of fun. I miss those days.
These days it's very different, mostly large-ish distributed systems.
It's the difference between hearing a lecture from a "bad" professor at uni and watching a lecture video by Feynman, where he tries to get rid of scientific terms and explain things to the public in plain language.
As long as you get a definition for your terms, things are manageable.
I heard from someone who was in that field that the main qualification for such a job is analytical ability and mathematics knowledge, apart from programming skills, of course.
Your example raises some serious red flags. Did it ever dawn on you that the reason these background tasks were offloaded to a dedicated service might have been to shed this load from your main server and protect it from handling sudden peaks in demand?
These background tasks are all database calls. That means the CPU is just waiting on the database for the majority of each call. Most modern servers can handle 10k of these calls concurrently, off one not-so-powerful CPU; even half a CPU can handle it. Of course it depends on the CPU, but you get my point.
The database is the bottleneck. The database is the thing that needs to be scaled before you scale servers; this is the most common web-application pattern. One way is providing more compute to the database (sharding is better than increasing CPU power, as the bottleneck in a database is usually filesystem access, not CPU). Another is to have a queue buffer the traffic spikes. Both of these address the database first.
In most web apps, all the server does is wait for a database; the database is doing the compute. You never want the server to do compute, as that becomes what we call a "blocking call." These blocking calls are the ones you offload to an external service, as they "block" entire CPU threads. Database calls do not "block", because the server will context-switch to another green thread while it waits on the database.
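To make the "waiting, not computing" point concrete, a toy sketch (the 50 ms sleep stands in for a database round trip; the numbers are made up):

    # One thread happily overlaps 10k simulated 50 ms database round trips,
    # because each "call" is just parked waiting on I/O.
    import asyncio, time

    async def fake_db_call(i):
        await asyncio.sleep(0.05)   # stand-in for waiting on the database
        return i

    async def main():
        start = time.perf_counter()
        await asyncio.gather(*(fake_db_call(i) for i in range(10_000)))
        print(f"10k calls in {time.perf_counter() - start:.2f}s on one thread")

    asyncio.run(main())

Swap the sleep for a CPU-bound loop and the whole thing serializes, which is exactly the "blocking call" case above.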
If you work somewhere that scales CRUD servers without scaling the central database first, it usually means you're at a company that doesn't get it and overemphasizes "architecture" over common sense. It's actually extremely common in lower-tier small companies for not-so-smart people to build things like this that don't make any sense. They aren't thinking coherently, and I've seen tons of people miss this common-sense notion.
I'll be frank: it's stupid and defies common sense. It's likely this is what you're doing? But it's also extremely commonplace.
no worries.
I've "invented" all sorts of old patents, all sorts of algorithms, including the PID algorithm. I think it helped form a very practically useful "intuition".
But I've noticed that some people are passionately against this type of self-exploration.
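For anyone curious, the PID loop mentioned above fits in a few lines (the gains and the toy plant here are arbitrary, just to show the shape):

    # Textbook PID: output = Kp*error + Ki*integral(error) + Kd*d(error)/dt.
    class PID:
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, setpoint, measurement, dt):
            error = setpoint - measurement
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Drive a toy first-order system toward a setpoint of 1.0.
    pid, value = PID(2.0, 0.5, 0.1), 0.0
    for _ in range(50):
        value += pid.step(1.0, value, 0.1) * 0.1
    print(round(value, 3))  # settles near 1.0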
it's "you [don't] know" all the way down.