
Type checking is a symptom, not a solution

(programmingsimplicity.substack.com)
67 points mpweiher | 4 comments
1. kelnos No.45143484
I don't really get what the author is trying to say here; it sounds like they don't really know what they're talking about.

They talk about how electronics engineers use "isolation, explicit interfaces, and time-aware design" to solve these problems as if that's somehow different than current software development practices. Isolation and explicit interfaces are types. There's no strict analogue to "time-aware design" because software depends on timing in very different ways than hardware does.

Electronics engineers use a raft of verification tools that rely on what in software we'd call "types". EEs would actually love it if the various HDLs they use had stronger typing.

Where they really lost me is in the idea that UNIX pipelines and Docker are examples of their concept done right. UNIX pipelines are brittle! One small change in the text output of a program will completely break any pipelines using it. And for the person developing the tool, testing that the output format hasn't changed is a tedious job that requires writing lots of string-parsing code. Typed program output would make this a lot easier.
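
A contrived sketch of the kind of breakage I mean, pulling sizes and names out of `ls -l` by column position:

    # fragile: the "interface" is whitespace-separated column positions
    ls -l | awk 'NR > 1 { print $5, $9 }'

Filenames with spaces, a platform whose ls prints an extra column, or a locale that changes the date format all silently break it, and nothing at the boundary ever checks the contract.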

And Docker... ugh, Docker. Docker has increasingly been used for isolation and "security", but the original purpose of Docker was reproducible environments, because developers were having trouble getting code to run the same way between their development machines and production (and between variations of environments in production). The isolation/security properties of Docker are bolted on, and it shows.

replies(3): >>45144089 >>45144966 >>45146272
2. mcswell No.45144089
"UNIX pipelines": Yes, this is where I stopped reading. The author claims that

> UNIX pipelines routinely compose dozens of programs into complex workflows, yet they require no type checking at the transport layer. The individual programs trust that data flowing between them consists of simple, agreed-upon formats—usually lines of text separated by newlines.

The transport layer is not the relevant layer; it's the programs on either end that matter. If you're just scanning for words or characters, that's straightforward enough, but the equivalent in programming is passing raw strings from one function to another. You run into problems when program 1 in the pipeline emits tab-delimited fields while program 2 expects spaces or commas, program 3 expects the fields in a different order, and program 4 doesn't know what to do when some records have three fields and others have four or two.
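
Concretely, something like this (hypothetical programs, delimiters chosen to show the mismatch):

    # program 1 emits tab-separated "name<TAB>count" records
    printf 'alice\t3\nbob\t7\n' |
      awk -F',' '{ sum += $2 } END { print sum }'   # program 2 assumes commas

    # prints 0 instead of failing: the delimiter mismatch is a silent "type error",
    # because nothing at the transport layer knows how the fields were encoded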

3. mikestorrent No.45144966
> Where they really lost me is in the idea that UNIX pipelines and Docker are examples of their concept done right. UNIX pipelines are brittle!

Yes. String-parse the output of a command... hope it's not different on macOS, or add special conditions to it... it's all very tedious. AI seems to do this stuff well due to the sheer number of examples people have produced, thank god, but I put in my decades of writing it by hand. The pipe is great, but Bash sucks as a language for many reasons; lack of types is pretty low on the list, but still on there somewhere.
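
The classic example of the macOS thing (assuming GNU stat on Linux, BSD stat on macOS): even asking for a file's size in bytes needs two different incantations.

    stat -c %s file.txt   # GNU coreutils (Linux)
    stat -f %z file.txt   # BSD stat (macOS)

Same question, two flags, so every "portable" script grows a branch or quietly assumes coreutils is installed.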

PowerShell seems to have fixed that a bit by passing objects around in the shell. On the UNIX side, more and more utilities just serialize to JSON and we all expect `jq` - somehow not always installed, even though Perl always is...
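
Something like this (made-up records, but the jq invocation is real):

    printf '{"name":"web","cpu":12}\n{"name":"db","cpu":87}\n' |
      jq -r 'select(.cpu > 50) | .name'
    # -> db

The structure (field names, nesting) survives the pipe, and jq errors out loudly on malformed input instead of silently misparsing it the way a column-counting awk script would.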

4. Mikhail_Edoshin No.45146272
Rob Pike once said that the idea of programs doing one thing (and being connected with pipelines) was "dead and gone and the eulogy was delivered by Perl". A scripting language is much more convenient to use, even when it just shells out to the old-style single-purpose programs. And although most such languages are weakly typed, they are still stricter than whatever gets sent to stdout.