evo (No.45069654):
Another way I like to think about this is finding 'closeable' contexts to work in; that is, abstractions that are compact and logically consistent enough that you can close them up and interact with them only through their external interface, without always knowing the inner details. Metaphorically, your system can be a bunch of closed boxes that you can treat as opaque boxes, rather than a bunch of open boxes whose contents spill out into each other. Think 'shipping containers' instead of longshoremen throwing loose cargo into your boat.

If you can do this regularly, you can keep the _effective_ cognitive size of the system small even as each closed box might be quite complex internally.
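A minimal sketch of the idea in Go, where unexported types naturally enforce the 'closed box'. Everything here (the payments domain, the Charger interface, the stripeCharger type) is made up for illustration:

```go
package payments

// Charger is the box's external interface: callers depend only on
// this contract, never on the machinery behind it.
type Charger interface {
	Charge(accountID string, cents int64) error
}

// New hands back a Charger; the concrete type stays unexported, so
// the box cannot be pried open from outside this package.
func New() Charger { return &stripeCharger{retries: 3} }

// stripeCharger is the potentially complex interior: retries,
// idempotency keys, logging. None of it leaks past Charger.
type stripeCharger struct {
	retries int
}

func (c *stripeCharger) Charge(accountID string, cents int64) error {
	// ... internal complexity lives here ...
	return nil
}
```

Callers import the package and reason about the whole subsystem through one method signature, which is exactly the 'effective cognitive size' win: however gnarly the interior gets, the surface you have to hold in your head stays one interface wide.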