
Parse, Don't Validate (2019)

(lexi-lambda.github.io)
389 points by melse
seanwilson ◴[] No.27640953[source]
From the Twitter link:

> IME, people in dynamic languages almost never program this way, though—they prefer to use validation and some form of shotgun parsing. My guess as to why? Writing that kind of code in dynamically-typed languages is often a lot more boilerplate than it is in statically-typed ones!

I feel that once you've got experience working in (usually functional) programming languages with strong static type checking, flaky dynamic code that relies on runtime checks and on just being careful to avoid runtime errors makes your skin crawl, and you'll intuitively gravitate towards designs that take advantage of strong static type checks.

When all you know is dynamic languages, the design guidance you get from strong static type checking is lost, so there are more bad design paths you can go down. Patching up flaky code with ad-hoc runtime checks and debugging runtime errors becomes the norm because you just don't know any better and the type system isn't going to teach you.

More general advice would be "prefer strong static type checking over runtime checks" as it makes a lot of design and robustness problems go away.

Even if you can't use e.g. Haskell or OCaml in your daily work, a few weeks or even just a few days of trying to learn them will open your eyes and make you a better coder elsewhere. Map/filter/reduce, immutable data structures, non-nullable types etc. had been in other languages for over 30 years before these ideas became more mainstream best practices (I'm still waiting for pattern matching + algebraic data types).

It's weird how long it's taking for people to rediscover why strong static types were a good idea.
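To make the "parse, don't validate" idea from the article concrete, here is a minimal C++ sketch (the `NonEmpty` type is illustrative, not from the article): instead of checking for emptiness at every use site, you parse the untrusted data once into a type that can only be constructed when the check succeeds.

```cpp
#include <optional>
#include <vector>

// Illustrative "parse, don't validate" type: a vector proven non-empty.
// The check happens exactly once, at construction; afterwards the type
// itself carries the guarantee, so head() is total and cannot fail.
class NonEmpty {
    std::vector<int> items_;
    explicit NonEmpty(std::vector<int> v) : items_(std::move(v)) {}
public:
    // The only way to obtain a NonEmpty is through parse().
    static std::optional<NonEmpty> parse(std::vector<int> v) {
        if (v.empty()) return std::nullopt;
        return NonEmpty(std::move(v));
    }
    int head() const { return items_.front(); }  // safe: never empty
};
```

Any function that takes a `NonEmpty` can call `head()` without re-checking; the compiler, not the programmer's discipline, tracks the invariant.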

replies(10): >>27641187 #>>27641516 #>>27641651 #>>27641837 #>>27641858 #>>27641960 #>>27642032 #>>27643060 #>>27644651 #>>27657615 #
ukj ◴[] No.27641651[source]
Every programming paradigm is a good idea if the respective trade-offs are acceptable to you.

For example, one good reason why strong static types are a bad idea... they prevent you from implementing dynamic dispatch.

Routers. You can't have routers.

replies(3): >>27641741 #>>27642043 #>>27642764 #
kortex ◴[] No.27641741[source]
Sure you can. You just need the right amount of indirection and abstraction. I think almost every language has some escape hatch which lets you implement dynamic dispatch.
replies(1): >>27641809 #
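kortex's point can be shown directly: here is a sketch of a string-keyed router in C++, a statically typed language, using `std::function` as the escape hatch for runtime dispatch (the `Router` API here is made up for illustration).

```cpp
#include <functional>
#include <map>
#include <string>

// A tiny router in a statically typed language: which handler runs is
// decided at runtime by the path string, while static typing constrains
// the handler signature.
using Handler = std::function<std::string(const std::string&)>;

class Router {
    std::map<std::string, Handler> routes_;
public:
    void add(const std::string& path, Handler h) {
        routes_[path] = std::move(h);
    }
    std::string handle(const std::string& path,
                       const std::string& request) const {
        auto it = routes_.find(path);
        if (it == routes_.end()) return "404";
        return it->second(request);  // dynamic dispatch on runtime data
    }
};
```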
ukj ◴[] No.27641809[source]
This is a trivial and obvious implication of Turing completeness. Why do you even bother making the point?

With the right amount of indirection/abstraction you can implement everything in Assembly.

But you don't. Because you like all the heavy lifting the language does for you.

First Class citizens is what we are actually interested in when we talk about programming language paradigm-choices.

https://en.wikipedia.org/wiki/First-class_citizen

replies(3): >>27641950 #>>27641980 #>>27641983 #
dkersten ◴[] No.27641980[source]
> Why do you even bother making the point?

Maybe because you said:

> they prevent you from implementing dynamic dispatch.

and

> Routers. You can't have routers.

Which just isn't true. You can implement dynamic dispatch and you can have routers, but they come at a cost (either of complex code or of giving up compile-time type safety, but in a dynamic language you don't have the latter anyway, so with a static language you can at least choose when you're willing to pay the price).

> First Class citizens is what we are actually interested in when we talk about programming language paradigm-choices.

But that's not what you said in your other comment. You just said you can't have these things, not that they're not first-class citizens. Besides, some static languages do have first-class support for more dynamic features. C++ has tools like std::variant and std::any in its standard library for times you want some more dynamism and are willing to pay the tradeoffs. In Java you have Object. In other static languages, you have other built-in tools.

replies(2): >>27642019 #>>27642836 #
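A quick sketch of the two C++ tools mentioned above (function names here are illustrative): `std::variant` gives a closed set of alternatives dispatched with `std::visit`, while `std::any` erases the type entirely and defers all checking to runtime, much like a dynamic language.

```cpp
#include <any>
#include <string>
#include <type_traits>
#include <typeinfo>
#include <variant>

// Closed-set dynamism: the compiler knows v is exactly int or string,
// and std::visit forces every alternative to be handled.
std::string describe(const std::variant<int, std::string>& v) {
    return std::visit([](auto&& x) -> std::string {
        using T = std::decay_t<decltype(x)>;
        if constexpr (std::is_same_v<T, int>) return "int";
        else return "string";
    }, v);
}

// Open-ended dynamism: std::any holds anything; type checks happen
// only at runtime, via typeid or any_cast.
bool holds_int(const std::any& a) {
    return a.type() == typeid(int);
}
```

The tradeoff is visible in the types: `variant` keeps compile-time exhaustiveness, `any` trades it away for flexibility.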
ukj ◴[] No.27642019[source]
Everything comes at a cost of something in computation!

That is what “trade offs” means.

You can have any feature in any language once you undermine the default constraints of your language. You can implement Scala in Brainfuck. Turing completeness guarantees it!

But this is not the sort of discourse we care about in practice.

https://en.wikipedia.org/wiki/Brainfuck

replies(1): >>27642114 #
dkersten ◴[] No.27642114[source]
Yes, and? How is that relevant here?

You said "you can't", kortex said "you can" and then you moved the goal posts to "you can because of turing completeness, but its bad, Why do you even bother making the point?" to which I replied "because its a valid response to you're `you can't`" now you moved them again to "everything comes at a cost" (which... I also said?).

Of course everything comes at a cost and yes, that's what "trade off" means. Dynamic languages come at a cost too (type checking is deferred to run time). So, this time, let me ask you: Why do you even bother making the point?

replies(2): >>27642153 #>>27642305 #
ukj ◴[] No.27642153[source]
Tractability vs possibility.

You don't grok the difference.

You can implement EVERYTHING in Brainfuck. Tractability is the reason you don't.

The goalposts are exactly where I set them. With my first comment.

"Every programming paradigm is a good idea if the respective trade-offs are acceptable to you."

replies(2): >>27642301 #>>27643931 #
bidirectional ◴[] No.27643931[source]
You can't actually implement everything in Brainfuck. You can implement something which is performing an equivalent computation in an abstract, mathematical sense. But there's no way to write Firefox or Windows or Fortnite in Brainfuck. Turing completeness means you can evaluate any computable function of type N -> N (and the many things isomorphic to that), it doesn't give you anything else.
replies(2): >>27644803 #>>27645469 #
dkersten ◴[] No.27644803{3}[source]
Just as a Turing machine requires an infinitely sized tape to compute any computable function, so too would Brainfuck require an infinitely sized tape (or whatever it's called in BF) to compute any computable function. Since memory is finite, neither of these properties is actually available on real hardware.