Parse, Don't Validate (2019)

(lexi-lambda.github.io)
389 points by melse | 126 comments
1. seanwilson ◴[] No.27640953[source]
From the Twitter link:

> IME, people in dynamic languages almost never program this way, though—they prefer to use validation and some form of shotgun parsing. My guess as to why? Writing that kind of code in dynamically-typed languages is often a lot more boilerplate than it is in statically-typed ones!

I feel that once you've got experience working in (usually functional) programming languages with strong static type checking, flaky dynamic code that relies on runtime checks and on just being careful to avoid runtime errors makes your skin crawl, and you'll intuitively gravitate towards designs that take advantage of strong static type checks.

When all you know is dynamic languages, the design guidance you get from strong static type checking is lost, so there are more bad design paths you can go down. Patching up flaky code with ad-hoc runtime checks and debugging runtime errors becomes the norm because you just don't know any better and the type system isn't going to teach you.

More general advice would be "prefer strong static type checking over runtime checks" as it makes a lot of design and robustness problems go away.

Even if you can't use e.g. Haskell or OCaml in your daily work, a few weeks or even just a few days of trying to learn them will open your eyes and make you a better coder elsewhere. Map/filter/reduce, immutable data structures, non-nullable types etc. were in other languages for over 30 years before these ideas became more mainstream best practices (I'm still waiting for pattern matching + algebraic data types).

It's weird how long it's taking for people to rediscover why strong static types were a good idea.

replies(10): >>27641187 #>>27641516 #>>27641651 #>>27641837 #>>27641858 #>>27641960 #>>27642032 #>>27643060 #>>27644651 #>>27657615 #
2. benrbray ◴[] No.27641187[source]
Yeah, I remember I used to get frustrated when I had to read code that used map() or even .forEach() extensively, thinking a simple, imperative for loop would suffice. I slowly came to realize that a for loop gives you too much power. It's a hammer. It holds the place of a bug you just haven't written yet. Now I'm the one writing JavaScript like it's Haskell. Although Haskell could learn a thing or two from TypeScript about row polymorphism.
replies(2): >>27641257 #>>27642274 #
3. k__ ◴[] No.27641257[source]
Is using "row polymorphism" the same as using a "structutal type system"?

I'd never heard of the former.

replies(1): >>27641425 #
4. inbx0 ◴[] No.27641425{3}[source]
Not really, and correct me if I'm wrong but afaik TS doesn't actually do row polymorphism so much as structural subtyping - although the difference between the two is pretty small and you can get pretty close to row polymorphism with structural subtyping + generics.

But even if these were the same thing and we want to be a bit pedantic since this is HN after all, structural type systems often support some kind of subtyping or row polymorphism, but it's not a strict requirement for a type system to be "structural". You could have a structural type system that doesn't allow

  { a: int, b: int } 
to be used where

  { a: int } 
is expected. How practical such a type system would be... I don't know. Flow type checker for JavaScript makes a distinction between "exact" types, i.e. object must have exactly the properties listed and no more, and "inexact" types where such subtyping is allowed.
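
For a concrete feel of the difference, here's a minimal TypeScript sketch (hypothetical type names) of the structural-subtyping behaviour being described: a wider record is accepted where a narrower one is expected, as long as it arrives via a variable rather than a fresh literal.

    type Narrow = { a: number };
    type Wide = { a: number; b: number };

    function takesNarrow(r: Narrow): number {
      return r.a;
    }

    const w: Wide = { a: 1, b: 2 };
    takesNarrow(w);  // OK: Wide is structurally a subtype of Narrow
    // An "exact" type system (like Flow's {| a: number |}) would reject the extra field.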
replies(2): >>27641551 #>>27641893 #
5. steveklabnik ◴[] No.27641516[source]
> It's weird how long it's taking for people to rediscover why strong static types were a good idea.

I don't think it's weird. Most of those languages were not popular in industry for various reasons, and the ones that were (especially in say, the 90s) did not have particularly capable static type systems. The boilerplate/benefit ratio was all off.

The way I describe this dichotomy personally is, I would rather use Ruby than Java 1.5. I would rather use Rust than Ruby. (Java 1.5 is the last version of Java I have significant development experience in, and they've made the type system much more capable since those days.)

6. seanwilson ◴[] No.27641551{4}[source]
> How practical such a type system would be... I don't know. Flow type checker for JavaScript makes a distinction between "exact" types, i.e. object must have exactly the properties listed and no more, and "inexact" types where such subtyping is allowed.

TypeScript doesn't have this check (https://github.com/Microsoft/TypeScript/issues/12936) and I've found it can be really error prone when you're wanting to return a copy of an object with some fields updated. Spot the bug in this example: https://www.typescriptlang.org/play?#code/C4TwDgpgBAKlC8UDeU...
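
A hedged sketch of the kind of hole the missing exactness check leaves (not necessarily the bug in the linked example): excess-property checks only apply to fresh object literals, so extra fields can ride along silently when you copy and update a value.

    type Point = { x: number; y: number };

    function movedRight(p: Point, dx: number): Point {
      return { ...p, x: p.x + dx };   // spread silently copies any extra fields too
    }

    const labelled = { x: 1, y: 2, label: "origin" };
    const p: Point = labelled;        // accepted: no excess-property check on variables
    const q = movedRight(p, 5);       // q still carries `label` at runtime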

7. ukj ◴[] No.27641651[source]
Every programming paradigm is a good idea if the respective trade-offs are acceptable to you.

For example, one good reason why strong static types are a bad idea... they prevent you from implementing dynamic dispatch.

Routers. You can't have routers.

replies(3): >>27641741 #>>27642043 #>>27642764 #
8. kortex ◴[] No.27641741[source]
Sure you can. You just need the right amount of indirection and abstraction. I think almost every language has some escape hatch which lets you implement dynamic dispatch.
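
As one minimal sketch of what that can look like (illustrative names; TypeScript used only because it's a statically checked language): a route table whose target is picked at runtime, while every handler call stays statically typed.

    interface Handler {
      handle(body: string): string;
    }

    const routes: Record<string, Handler> = {
      "/ping": { handle: () => "pong" },
      "/echo": { handle: body => body },
    };

    function route(path: string, body: string): string {
      const h = routes[path];                 // which Handler runs is a runtime fact
      if (h === undefined) throw new Error(`no route for ${path}`);
      return h.handle(body);                  // dynamic dispatch, statically checked call
    }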
replies(1): >>27641809 #
9. ukj ◴[] No.27641809{3}[source]
This is a trivial and obvious implication of Turing completeness. Why do you even bother making the point?

With the right amount of indirection/abstraction you can implement everything in Assembly.

But you don't. Because you like all the heavy lifting the language does for you.

First Class citizens is what we are actually interested in when we talk about programming language paradigm-choices.

https://en.wikipedia.org/wiki/First-class_citizen

replies(3): >>27641950 #>>27641980 #>>27641983 #
10. chriswarbo ◴[] No.27641837[source]
I have the same feeling after spending a few years with Haskell, StandardML, Agda, Idris, Coq, etc.

One trick I've found very useful is to realise that Maybe (AKA Option) can be thought of as "a list with at most one element". Dynamic languages usually have some notion of list/array, which we can use as if it were a Maybe/Option type; e.g. we can follow a 'parse, don't validate' approach by wrapping a "parsed" result in a list, and returning an empty list otherwise. This allows us to use their existing 'map', 'filter', etc. too ;)
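
A minimal sketch of that trick in TypeScript (any dynamic language with arrays works the same way; names are made up): the "parse" step returns an empty or one-element array, and the usual map/forEach then run zero or one times.

    // [] means "no value", [n] means "parsed successfully"
    function parseAge(s: string): number[] {
      const n = Number(s);
      return Number.isInteger(n) && n >= 0 ? [n] : [];
    }

    const msgs = parseAge("42").map(age => `you are ${age}`);  // [] or ["you are 42"]
    msgs.forEach(m => console.log(m));                         // runs at most once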

(This is explored in more detail, including links to logic programming, in https://link.springer.com/chapter/10.1007%2F3-540-15975-4_33 )

If we want to keep track of useful error messages, I've found Scala's "Try" type to be useful ('Try[T]' is isomorphic to 'Either Throwable T'). Annoyingly, dynamic languages usually lack a built-in sum type; the closest thing is usually a tagged pair like '[true, myFoo]'/'[false, myException]', which is pretty naff.

replies(2): >>27641883 #>>27647266 #
11. jhgb ◴[] No.27641858[source]
> > IME, people in dynamic languages almost never program this way, though—they prefer to use validation

I wonder how many people the author met.

12. np_tedious ◴[] No.27641883[source]
> a list with at most one element

I've found Scala or even LINQ to really hammer home this point, even to those who aren't into FP very much. Doing that map/flatMap makes it click for just about anyone.

13. mixedCase ◴[] No.27641893{4}[source]
FWIW, you can achieve row polymorphism in TypeScript, although it's not super intuitive.

  function rowPolymorphic<R extends { a: number }>(record: R): R & { a: string } {
    return {
      ...record,
      a: record.a.toString(),
    }
  }

  const rec = rowPolymorphic({
    a: 123,
    b: "string",
  })
  console.log(rec.b)
14. kortex ◴[] No.27641950{4}[source]
I mean, GP said "you can't have routers" and maybe I'm being dense by interpreting that as "never or almost never," but even with a generous "too hard to be practical," I still don't think it's correct.

And I explicitly said "escape hatch" meaning language feature. You don't need that much indirection to get routers in Haskell, Rust, Go, C, C++... like I fail to see how implementing routers is a barrier in strict type system languages.

Is it easier in Python or JS? Sure. Can't? Hardly.

Edit: here's some vtable dispatch (unless that doesn't count as "dynamic dispatch") in Rust. Looks really straightforward.

https://doc.rust-lang.org/1.8.0/book/trait-objects.html

replies(1): >>27651184 #
15. RHSeeger ◴[] No.27641960[source]
Plenty of people know multiple statically typed and dynamic languages, and multiple functional, imperative, and other languages; and use dynamic languages for some things but not other things. The set of people using dynamic languages isn't just "those that haven't had their eyes opened yet to what static languages can do". Different languages and paradigms make different things easier.

I do believe that, for long lasting, larger projects, static typing tends to make the code easier to maintain as time goes on. But not every project is like that. In fact, not every project uses a single language. Some use statically typed languages for some parts, and dynamically typed for others (this is common in web dev).

replies(3): >>27642262 #>>27642369 #>>27646516 #
16. dkersten ◴[] No.27641980{4}[source]
> Why do you even bother making the point?

Maybe because you said:

> they prevent you from implementing dynamic dispatch.

and

> Routers. You can't have routers.

Which just isn't true. You can implement dynamic dispatch and you can have routers, but they come at a cost (either of complex code or of giving up compile-time type safety, but in a dynamic language you don't have the latter anyway, so with a static language you can at least choose when you're willing to pay the price).

> First Class citizens is what we are actually interested in when we talk about programming language paradigm-choices.

But that's not what you said in your other comment. You just said you can't have these things, not that they're not first-class citizens. Besides, some static languages do have first-class support for more dynamic features. C++ has tools like std::variant and std::any in its standard library for times you want some more dynamism and are willing to pay the tradeoffs. In Java you have Object. In other static languages, you have other built-in tools.

replies(2): >>27642019 #>>27642836 #
17. mixedCase ◴[] No.27641983{4}[source]
Did you mean something other than "dynamic dispatch", or what do you mean by "first class support"?

No offense, but your claim sounds confused to me, though maybe I'm the one who's confused. AFAIK I do dynamic dispatch all the time in strongly typed languages. Can you show an example that a strongly typed language can't accomplish?

replies(1): >>27642516 #
18. ukj ◴[] No.27642019{5}[source]
Everything comes at a cost of something in computation!

That is what “trade offs” means.

You can have any feature in any language once you undermine the default constraints of your language. You can implement Scala in Brainfuck. Turing completeness guarantees it!

But this is not the sort of discourse we care about in practice.

https://en.wikipedia.org/wiki/Brainfuck

replies(1): >>27642114 #
19. archsurface ◴[] No.27642032[source]
"It's weird how long it's taking for people to rediscover why strong static types were a good idea." It sounds like you're projecting your limited experience eg you've dismissed all Ms dev. Many people have been using strong static for decades. The benefits have never been out of sight. Many use dynamic, but many have always used strong static.
20. ImprobableTruth ◴[] No.27642043[source]
Huh? Could you give a specific example? Because e.g. C++ and Rust definitely have dynamic dispatch through their vtable mechanisms.
replies(1): >>27642108 #
21. ukj ◴[] No.27642108{3}[source]
Do you understand the difference between compile time and runtime?

Neither C++ nor Rust give you static type safety AND dynamic dispatch because all of the safety checks for C++ and Rust happen at compile time. Not runtime.

replies(4): >>27642140 #>>27642147 #>>27642150 #>>27642761 #
22. dkersten ◴[] No.27642114{6}[source]
Yes, and? How is that relevant here?

You said "you can't", kortex said "you can" and then you moved the goal posts to "you can because of turing completeness, but its bad, Why do you even bother making the point?" to which I replied "because its a valid response to you're `you can't`" now you moved them again to "everything comes at a cost" (which... I also said?).

Of course everything comes at a cost and yes, that's what "trade off" means. Dynamic languages come at a cost too (type checking is deferred to run time). So, this time, let me ask you: Why do you even bother making the point?

replies(2): >>27642153 #>>27642305 #
23. detaro ◴[] No.27642140{4}[source]
> Neither C++ nor Rust have dynamic dispatch

You appear to be using some other definition of dynamic dispatch than the rest of the software industry...

replies(1): >>27642184 #
24. dkersten ◴[] No.27642147{4}[source]
Dynamic languages do it at runtime too, JUST LIKE Rust and C++ do. What's the difference?

C++ and Rust let you have compile-time safety, until you choose to give it up and have runtime checks instead. Dynamic languages only allow the latter. Static languages let you choose, dynamic languages chose the latter for you in all cases. Both can have dynamic dispatch.

Besides, static languages can have compile-time type safe dynamic dispatch, if you constrain the dispatch to compile-time-known types (eg std::variant). You only lose that if you want fully unconstrained dynamism, in which case you defer type checking to runtime. Which is what dynamic languages always have.

So both C++ and Rust DO have dynamic dispatch and the programmer gets to choose what level of the dynamism/safety trade off they want. And yes, these features ARE first class features of the languages.
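
A rough TypeScript analogue of the "enumerate the types, keep the compile-time checks" option (a discriminated union standing in for std::variant; names are illustrative):

    type Msg =
      | { kind: "ping" }
      | { kind: "echo"; text: string };

    function dispatch(m: Msg): string {
      switch (m.kind) {              // which branch runs is decided at runtime...
        case "ping": return "pong";
        case "echo": return m.text;  // ...but every branch is checked at compile time
      }
    }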

replies(1): >>27642207 #
25. jsnell ◴[] No.27642150{4}[source]
I think you might need to define what you mean by dynamic dispatch, because it is very clearly something totally different than how the term is commonly understood.
replies(1): >>27643462 #
26. ukj ◴[] No.27642153{7}[source]
Tractability vs possibility.

You don't grok the difference.

You can implement EVERYTHING in Brainfuck. Tractability is the reason you don't.

The goalposts are exactly where I set them. With my first comment.

"Every programming paradigm is a good idea if the respective trade-offs are acceptable to you."

replies(2): >>27642301 #>>27643931 #
27. ukj ◴[] No.27642184{5}[source]
You appear to be conflating compilers with runtimes.

Dynamic dispatch happens at runtime.

C++ and Rust are compile-time tools, not runtimes.

replies(2): >>27642649 #>>27643030 #
28. ukj ◴[] No.27642207{5}[source]
>until you choose to give it up

PRECISELY

You have to give up the safety to get the feature.

So you "want type-safety". Until you don't.

>static languages can have compile-time type safe dynamic dispatch

"Compile-time dynamic dispatch" is an oxymoron. Dynamic dispatch happens at runtime.

replies(1): >>27642351 #
29. seanwilson ◴[] No.27642262[source]
> The set of people using dynamic languages isn't just "those that haven't had their eyes opened yet to what static languages can do".

This was more aimed at people who are new to the idea of parsing over validating. In a strong statically typed language, the type system would naturally guide you to use this approach so if this isn't natural to you then time in other languages would probably be worthwhile.

replies(1): >>27642352 #
30. touisteur ◴[] No.27642274[source]
On the other hand, I'm endlessly tired of 'too simple' foreach/map iterators. They're OK until you want to do something like different execution on the first and/or last element... Give me a way to implement a 'join' pattern over the foreach iterators, or less terse iterators (with 'some' positional information). I think I'm just ranting about the for-of iterator in Ada...
replies(2): >>27642971 #>>27644280 #
31. dkersten ◴[] No.27642301{8}[source]
It's perfectly tractable though. Just because you don't understand it or don't think it is doesn't make it so.

> "Every programming paradigm is a good idea if the respective trade-offs are acceptable to you."

That's not what we are responding to. Nobody here is arguing over this statement. We are responding to your assertion that statically typed, compile-time-checked languages _prevent_ you from having dynamic dispatch and that you _can't have_ routers because of that. Neither of which are true.

Dynamic languages prevent you from having compile time checks. Does that make them bad? Static languages give you compile time safety, but if you're willing to forego that [1], then you can get the EXACT SAME behavior as dynamic languages give you.

You literally said:

    For example, one good reason why strong static types are a bad idea... they prevent you from implementing dynamic dispatch.

    Routers. You can't have routers.
Nowhere did you say anything about trying to implement it at compile time. Also, if strong static types are a bad idea because you can't maintain them all the time, then dynamically typed languages are a bad idea because you don't get static types ever; it's always at runtime.

Just because a hammer can't screw in screws doesn't mean it's a bad idea, it just means that you can't use it for all use cases. This is the same. You can use static types, and for the few cases where you need runtime dynamism, you use that. That doesn't make the static types in the rest of your code bad. It just gives you additional tools that dynamic types alone don't have.

[1] to various degrees, it's not all-or-nothing like you seem to be implying, there are levels of middle ground, like std::variant which maintains safety but you need to enumerate all possible types, or std::any which is fully dynamic but you give up compile time checks

replies(2): >>27642314 #>>27642370 #
34. RHSeeger ◴[] No.27642352{3}[source]
My misunderstanding then. Your comment came across (to me) as saying that people using dynamic language just don't know any better; that, if they would just learn a static language, they would suddenly understand the error of their ways. That's the thought that I was responding to.
35. wpietri ◴[] No.27642369[source]
For sure. I do most of my work in situations of high volatility of domain and requirements and relatively high risk. (E.g., startups, projects in new areas.)

Static typing really appeals to me on a personal level. I enjoy the process of analysis it requires. I love the notion of eliminating whole classes of bugs. It feels way more tidy. I took Odersky's Scala class for fun and loved it.

But in practice, statically typed languages are just a bad match for projects where the defining characteristic is unstable ground. They force artificial clarity when the reality is murky. And they impose costs that pay off in the long run, which only matters if the project has a long run. If I'm building something where we don't know where we're going, I'll reach for something like Python or Ruby to start.

This has been brought home to me by doing interviews recently. I have a sample problem that we pair on for an hour or so; there are 4 user stories. It involves a back end and a web front end. People can use any tools they want. My goal isn't to get particular things done; it's to see them at their best.

After doing a couple dozen, I'm seeing a pattern: developers using static tooling (e.g., Java, TypeScript) get circa half as much done as people using dynamic tooling (Python, plain JS). In the time when people in static contexts are still defining interfaces and types, people using dynamic tools are putting useful things on the page. Making a change in the static code often requires multiple tweaks in situations where it's one change in the dynamic code. It makes the extra costs of static tooling really obvious.

That doesn't harm the static-language interviewees, I should underline. The goal is to see how they work. But it was interesting to see that it wasn't just me feeling the extra costs. And those costs are only worth paying when they create payoffs down the road.

replies(4): >>27643070 #>>27643535 #>>27645326 #>>27648955 #
37. ukj ◴[] No.27642516{5}[source]
>I do dynamic dispatch all the time in strongly typed languages.

I believe semantics is getting in our way of communicating.

You don't do dynamic dispatch ___ALL___ the time. You only do it at runtime. And you only do static type safety at compile time. Those are different times.

You can't have both of those features at the __SAME TIME__, therefore you can't have both features ALL the time. They are mutually exclusive.

replies(2): >>27643104 #>>27643160 #
38. dkersten ◴[] No.27642533{10}[source]
Ok, I think I understand what you are trying to say. Instead of telling us how we don’t understand various things, how about next time you define your terms, choose your words more carefully and be clearer with your explanations, so we can actually understand what you’re trying to say.

So let me restate what I think you meant. Maybe I’m wrong.

> a limitation of compile time static types is that they cannot do dynamic dispatch, which relies on information that isn't known at compile time.

I think we can all agree on this?

Note that I said it’s a limitation, not that it’s bad. Limitations don’t make something bad in themselves, they only constrain when it is an appropriate tool. You wouldn’t say that a submarine is bad because it can’t go on land? It’s a limitation but that doesn’t make it bad.

Then you said that because of this limitation, “you can’t have routers”.

But you can have routers in static languages like C++, Rust or even Haskell.

You have a choice (trade off) to make:

1. either you give up on some dynamism to keep compile time checks by enumerating the types you can dispatch on (as std::variant does)

2. or you choose to give up static checking (for this part of your code only!) and move to runtime checks for fully dynamic logic (as std::any does). When giving up compile-time safety for one part of your code, you do not give it up for all of your code, because you can runtime-validate on the boundaries going back into statically checked code paths, so the compiler can assume that values are valid once they get back in (as the runtime checks would reject bad types)

Neither case is “you can’t have routers”, but sure you can’t have fully dynamic routers purely at compile time.

Also, both options are perfectly tractable and both are typically first-class language features (at least in the statically typed languages I've used). In no case are the options "bad", despite each having different limitations.

In a dynamic-types-only language, you don’t get to choose the trade offs at all, you only get “fully dynamic no compile time checking, runtime checks only”.

Note also that in real life, few things are truly fully dynamic. You are always constrained by the operations that are expected to be carried out on the dynamic data, and while you might not know ahead of time what data you could get, you DO know what operations you expect to run on the data. And you can perform compile time checks around that.

So for a router, do you really need it to be fully dynamic? Or can you live with it being generic (i.e. it's a library that does routing and it can support any type, but you constrain it to known types when you actually use it)? If so, you have options that maintain type safety: you can use OOP inheritance, you can use enumerated types like std::variant, you can use generics/templates. The library works for any types but the user chooses what types to implement based on the operations that will be performed on the data. Even dynamic types do this, they just defer it to runtime.

Or you can have the router operate on purely dynamic types while the handlers that are routed to are statically typed (e.g. in C++, the router uses std::any, the handler registers what types it can accept, and the router checks validity before handing the data off).
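
A hedged TypeScript sketch of that last option (illustrative names): the router accepts unknown payloads, checks them once at the boundary, and everything past the check is statically typed.

    type User = { id: number; name: string };

    function isUser(x: unknown): x is User {
      return typeof x === "object" && x !== null &&
        typeof (x as { id?: unknown }).id === "number" &&
        typeof (x as { name?: unknown }).name === "string";
    }

    function handleUser(u: User): void {          // statically checked from here on
      console.log(u.name.toUpperCase());
    }

    function route(payload: unknown): void {
      if (isUser(payload)) handleUser(payload);   // runtime check only at the boundary
      else throw new Error("unroutable payload");
    }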

replies(1): >>27642610 #
39. BoiledCabbage ◴[] No.27642649{6}[source]
You are wrong. C++ supports dynamic dispatch.

Please read about it on Wikipedia

https://en.m.wikipedia.org/wiki/Dynamic_dispatch

And for the future, to not litter HN with comments like these: next time 10 different people in a thread are all explaining to you why you're mistaken, take a moment to try to listen and think through what they're saying instead of just digging deeper.

Having an open mind to learning something new, not just arguing a point is a great approach to life.

replies(1): >>27642717 #
40. dkersten ◴[] No.27642752{12}[source]
I don’t think any of the replies you got for your first comments were uncharitable at all, they responded to what you wrote.

At that point, you could have corrected us and revised your wording for clarity, but you did not, you dug your heels in, you moved the goal posts, you claimed we don’t understand various things that weren’t even really related to the comment we were responding to and you brought in irrelevant points like Turing completeness. You didn’t “get lynched“ right away, you could have reworded or clarified or asked what we didn’t understand about your statement.

Also, YOU didn't practice the principle of charity!

When people responded to what you wrote, you dug in and claimed we didn't understand compilers or tractability vs possibility and various other things, rather than thinking "maybe they didn't understand my point, I should clarify". So it's on you, not us.

I’m still not sure if I understand what you were trying to say, I am assuming that you meant what I wrote when I restated your comment, by piecing all of your different comments together. I’m still not sure if you actually meant static types are bad period (vs being bad at certain things and having certain limitations). And I still don’t agree with “you can’t have routers”.

Anyway, I’m done, have a nice day.

EDIT: I know I said I'm done, but your reply: "parse better", I gave you an out and still you blame everyone else and don't accept that you might have made a mistake. You're so sure that you are right and everyone else is wrong that you don't even entertain the possibility that you might have made a mistake (either in your reasoning or your explanation thereof). You appear to have an ego problem. You should take some time out and reflect on your life a bit.

replies(1): >>27642765 #
41. lexi-lambda ◴[] No.27642761{4}[source]
This is sort of a perplexing perspective to me. It seems tantamount to saying “you can’t predict whether a value will be a string or a number AND have static type safety because the value only exists at runtime, and static type safety only happens at compile-time.” Yes, obviously static typechecking happens at compile-time, but type systems are carefully designed so that the compile-time reasoning says something useful about what actually occurs at runtime—that is, after all, the whole point!

Focusing exclusively on what happens at compile-time is to miss the whole reason static type systems are useful in the first place: they allow compile-time reasoning about runtime behavior. Just as we can use a static type system to make predictions about programs that pass around first-class functions via static dispatch, we can also use them to make predictions about programs that use vtables or other constructions to perform dynamic dispatch. (Note that the difference between those two things isn’t even particularly well-defined; a call to a first-class function passed as an argument is a form of unknown call, and it is arguably a form of dynamic dispatch.)

Lots of statically typed languages provide dynamic dispatch. In fact, essentially all mainstream ones do: C++, Java, C#, Rust, TypeScript, even modern Fortran. None of these implementations require sacrificing static type safety in any way; rather, type systems are designed to ensure such dispatch sites are well-formed in other ways, without restricting their dynamic nature. And this is entirely in line with the OP, as there is no tension whatsoever between the techniques it describes and dynamic dispatch.

replies(1): >>27642990 #
42. justinpombrio ◴[] No.27642764[source]
Are you sure you know what dynamic dispatch is? Java has dynamic dispatch, and it is a statically typed language. In Java, it's often called "runtime polymorphism".

https://www.geeksforgeeks.org/dynamic-method-dispatch-runtim...

And using it doesn't give up any of Java's type safety guarantees. The arguments and return type of the method you call (which will be invoked with dynamic dispatch) are type checked.

replies(2): >>27643722 #>>27650975 #
44. Jtsummers ◴[] No.27642971{3}[source]
Ada provides a kind of iterator called a Cursor which could be used to build up a package of functions similar to the various C++ standard library algorithms. I believe this has actually already been done. Cursors can also be converted back to positional information if it makes sense (like with a Vector).
replies(1): >>27647234 #
45. ukj ◴[] No.27642990{5}[source]
You must be strawmanning my position to make this comment.

Obviously static type systems are useful. I don't even think my point is contrary to anything you are saying. This is not being said as a way of undermining any particular paradigm, because computation is universal - the models of computation on the other hand (programming languages) are not all "the same". There are qualitative differences.

Every single programming paradigm is a self-imposed restriction of some sort. It is precisely this restriction that we deem useful because they prevent us from shooting off our feet with shotguns. And we also prevent ourselves from being able to express certain patterns (of course we can deliberately/explicitly turn off the self-imposed restriction! ).

The restriction you are imposing on yourself is explicit in "type systems are carefully designed so that the compile-time reasoning says something useful about what actually occurs at runtime".

If you could completely determine everything that happens at runtime you wouldn't need exception/error handling!

All software would be 100% deterministic.

And it isn't.

I can say nothing of the structure of random bitstreams from unknown sources. I only know what I EXPECT them to be. Not what they actually are.

In this context parsing untrusted data IS runtime type-checking.

46. detaro ◴[] No.27643030{6}[source]
And the compiler generates the code necessary for dynamic dispatch to happen at runtime.
replies(1): >>27643438 #
47. lukashrb ◴[] No.27643060[source]
For what it's worth: people don't use dynamic languages because they don't know better or never used a static language. To better understand what dynamic languages bring to the table, here are some disadvantages of static types to consider:

Static types are awesome for local reasoning, but they are not that helpful in the context of the larger system (this already starts at the database, see idempotency mismatch).

Code with static types is sometimes larger and more complex than the problem it's trying to solve.

They tightly couple data to a type system, which (can) introduce incidental complexity.

> (I'm still waiting for pattern matching + algebraic data types)

This is a good example: if you pattern match against a specific structure (e.g. the position of fields in your algebraic data type), you tightly couple your program to that particular structure. If the structure changes, you may have to change all the code which pattern matches on this structure.

replies(4): >>27643241 #>>27643284 #>>27646280 #>>27648828 #
48. j1elo ◴[] No.27643070{3}[source]
Great comment. There are no silver bullets. I am Team static typing, but recognize how heavy a burden it would be to start a purely exploratory development in Rust or Java. It just "cuts your wings" in the name of correctness... well, sometimes it is useful to have the ability to start with a technically incorrect implementation that only fails in a corner case which is not your main point of research anyway.

On the other hand, as the initial code grows and grows, the cost of moving it all to a saner language grows too... discouraging a rewrite. So we end up with very complex production software that started as dynamic and is still dynamic.

replies(2): >>27643663 #>>27644924 #
49. Jtsummers ◴[] No.27643104{6}[source]
Dynamic dispatch is literally a runtime feature of a language. That's why it's called "dynamic". GP can absolutely use dynamic dispatch "all the time" in the sense that they use it regularly in their program, perhaps in every or nearly every program they write.

Your statement is verging on the nonsensical, like saying to someone "You don't use integers all the time, sometimes you use strings, they're different things." Well, duh?

EDIT: Also, it's discouraged on this site in general to use all caps for emphasis. Surrounding a word with asterisks (*phrase*) produces an italicized/emphasized form.

replies(1): >>27645393 #
50. mixedCase ◴[] No.27643160{6}[source]
"All the time" is an english expression, please, look it up before going all-caps Python name mangle convention on me.

Yes, of course dynamic dispatch is a runtime phenomenon, that's the dynamic part of it. But there's nothing stopping the code that performs dynamic dispatch from being strongly typed. Strong types are instructions used to prove that the code holds certain properties; they are a separate program from the final binary the compiler gives you. Do you also think that code that gets unit tested can't perform dynamic dispatch?

If your point is that "types don't exist at runtime anyway" (reflection aside) then you don't understand what the purpose of a type system is, nor what strongly typed code means.

replies(2): >>27643674 #>>27652144 #
51. tikhonj ◴[] No.27643241[source]
The example with pattern matching doesn't have anything to do with static types. You'll have exactly the same problem if you pattern match against positional arguments in Python:

    match event.get():
        case Click((x, y)):
            handle_click_at(x, y)
(Example from PEP 636[1].)

In both Python and statically typed languages you can avoid this by matching against field names rather than positions, or using some other interface to access data. This is an important design aspect to consider when writing code, but it does not have anything to do with dynamic typing. The only difference static typing makes is that when you do change the type in a way that breaks existing patterns, you find out statically rather than needing failing tests or runtime errors.

The same is true for the rest of the things you've mentioned: none are specific to static typing! My experience with a lot of Haskell, Python, JavaScript and other languages is that Haskell code for the same task tends to be shorter and simpler, albeit by relying on a set of higher-level abstractions you have to learn. I don't think much of that would change for a hypothetical dynamically typed variant of Haskell either!

[1]: https://www.python.org/dev/peps/pep-0636/#matching-sequences

replies(1): >>27644451 #
52. giovannibonetti ◴[] No.27643284[source]
When you said "idempotency mismatch", did you mean impedance mismatch?
replies(2): >>27643321 #>>27643452 #
53. tome ◴[] No.27643321{3}[source]
Strange if so because it's the "Object-relational impedance mismatch" not the "Static type-relational impedance mismatch".

https://en.wikipedia.org/wiki/Object%E2%80%93relational_impe...

54. ukj ◴[] No.27643438{7}[source]
But it doesn’t static-type-check that particular code-path.

Because it can’t.

55. lukashrb ◴[] No.27643452{3}[source]
You are right! Thank you for correcting me.
56. ukj ◴[] No.27643462{5}[source]
Deciding which implementation of a function handles any given piece of data at runtime.

Trivially, because you don’t have this knowledge (and therefore you can’t encode it into your type system) at compile time.

replies(1): >>27644291 #
57. giantrobot ◴[] No.27643535{3}[source]
I have had the exact same experience. There's lots of utility in statically typed languages. They're great if your problem space is well defined. With respect to type checking, it's like a jig in wood or metal working. You trade flexibility for correctness.

When the problem space is less well defined, the type-related boilerplate adds a lot of friction. It's not impossible to overcome that friction but it slows down progress. When you're under a tight deadline, development velocity is often more valuable than absolute correctness or even overall runtime efficiency.

A delivered product that works is usually more valuable than an undelivered product that's more "correct" or efficient. A development project is just a cost (for various values of cost) until it ships.

replies(2): >>27645855 #>>27646330 #
58. Jtsummers ◴[] No.27643603{8}[source]
C++ is not a compiler. C++ is a language with a specification from which people derive compilers and standard libraries and runtimes.

C++ the language very much does tell you what to expect at runtime, though perhaps not everything you could ever want. I mean, it's not Haskell or Idris with their much richer type systems.

replies(2): >>27643772 #>>27647623 #
60. ukj ◴[] No.27643674{7}[source]
There is something stopping you from running the proof against your implementation!

Proof-checking happens only at compile time.

The implementation that you want to prove things about is only available at runtime!

Non-availability (incompleteness) of information is what is preventing you…

61. ukj ◴[] No.27643722{3}[source]
Are you sure you know when type-checking and when dynamic dispatching happens?

Compile time is not runtime.

The Java compiler is not the JVM.

The compiler does type checking. The JVM does the dynamic dispatching.

Neither does both.

replies(1): >>27643937 #
62. ukj ◴[] No.27643772{9}[source]
Perfect!

Please produce a piece of code (in a language such as Coq or Agda) which proves whether any given piece of random data has the type “C++ compiler” or “C++ program”.

That is the epitome of static type-checking, right?

63. bidirectional ◴[] No.27643931{8}[source]
You can't actually implement everything in Brainfuck. You can implement something which is performing an equivalent computation in an abstract, mathematical sense. But there's no way to write Firefox or Windows or Fortnite in Brainfuck. Turing completeness means you can evaluate any computable function of type N -> N (and the many things isomorphic to that), it doesn't give you anything else.
replies(2): >>27644803 #>>27645469 #
64. justinpombrio ◴[] No.27643937{4}[source]
All those statements are correct. The people downvoting you know that too. I don't think anyone has figured out what point you're trying to make, though. Could you spell it out in more detail?

Consider addition. The compiler does type checking, and the JVM actually adds the numbers. Nonetheless, the addition is type checked, and does not represent a weakness of static type checking. How is dynamic dispatch different than this?

replies(1): >>27644085 #
65. ukj ◴[] No.27644085{5}[source]
The point is trivial.

You can’t have both static type safety AND dynamic dispatch at the same time and in the same context about the same data.

Choose one. Give up the other. Make a conscious trade off.

The language that you are using is making such trade offs for you - they are implicit in the language design. Best you know what they are because they are meaningful in principle and in practice.

replies(1): >>27644214 #
66. justinpombrio ◴[] No.27644214{6}[source]
Java has both static type safety AND dynamic dispatch at the same time and in the same context about the same data.
replies(1): >>27644270 #
67. ukj ◴[] No.27644270{7}[source]
No, it doesn’t.

The input-data to the compiler can’t be handled by the JVM and vice versa.

The JVM handles byte code as input. The compiler handles source code as input.

That is two different functions with two different data domains.

They literally have different types!

Which one of the two functions is the thing you call “Java”?

replies(2): >>27644442 #>>27644498 #
68. DougBTX ◴[] No.27644280{3}[source]
I quite like the “enumerate” pattern. When indexes matter, instead of `for x in v` you would write, `for (i, x) in enumerate(v)`, then the language only needs one type of for loop as both cases use the same enumerator interface.
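
A sketch of the same pattern in TypeScript/JavaScript (Array.prototype.entries plays the role of enumerate), which also gives cheap first/last checks:

    const v = ["a", "b", "c"];
    for (const [i, x] of v.entries()) {      // i.e. enumerate(v)
      const isFirst = i === 0;
      const isLast = i === v.length - 1;
      console.log(i, x, isFirst, isLast);
    }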
replies(1): >>27647194 #
69. justinpombrio ◴[] No.27644291{6}[source]
Aha! I think I have debugged your thinking. Wow you made that hard by arguing so much.

Apparently you do know what dynamic dispatch is, you're just wrong that it can't be type checked.

In Java, say you have an interface called `Foo` with a method `String foo()`, and two classes A and B that implement that method. Then you can write this code (apologies if the syntax isn't quite right, it's been a while since I've written Java):

    Foo foo = null;
    if (random_boolean()) {
        foo = new A();
    } else {
        foo = new B();
    }
    // This uses dynamic dispatch
    System.out.println(foo.foo());
This uses dynamic dispatch, but it is statically type checked. If you change A's `foo()` method to return an integer instead of a String, while still declaring that A implements the Foo interface, you will get a type error, at compile time.
replies(1): >>27644679 #
70. justinpombrio ◴[] No.27644442{8}[source]
A type system is sound when:

    for all expressions e:
      if e type checks with type t, then one of the following holds:
        - e evaluates to a value v of type t; or
        - e does not halt; or
        - e hits an "unavoidable error"
          like division by 0 or null deref
          (what counts as "unavoidable" varies from lang to lang)
Notice anything interesting about this definition? It uses the word "evaluate"! Type soundness is not just a statement about the type checker. It relates type checking to run time (and thus to compilation too). That is, if you muck with Java's runtime or compiler, you can break type soundness, even if you don't change its type system in the slightest.
replies(1): >>27644614 #
71. lukashrb ◴[] No.27644451{3}[source]
You're absolutely right. I guess I mentioned pattern matching in particular because of the cited sentence from OP "I'm still waiting for pattern matching + algebraic data types".

> The same is true for the rest of the things you've mentioned: none are specific to static typing!

Sure, I could be wrong here. I frequently am. But could you point out why you think that?

replies(1): >>27649355 #
72. BoiledCabbage ◴[] No.27644498{8}[source]
It's pretty clear at this point you don't understand what dynamic dispatch means.

I don't think it's worthwhile for anyone else to argue with you further. C++ and Java both support dynamic dispatch although you deny it.

You've taken up almost a full page of HN arguing with everyone trying to explain it to you. People have pointed you to wikipedia showing you that you're wrong. [1]

ISOCPP of which Bjarne is a director [2] says that C++ supports dynamic dispatch. [3]

And you continue to attempt to argue that everyone on HN, Wikipedia and the creator of the C++ language are all wrong and don't know what dynamic dispatch is.

Your continued insistence is both wrong and a negative impact at this point on hn. Please stop arguing something that numerous people have taken lots of patience and charity in trying every way possible to explain to you and what is clearly factually wrong.

If you're going to reply, please explain why an organization that Bjarne Stroustrup is a director of believes that C++ supports dynamic dispatch.

1. https://en.wikipedia.org/wiki/Dynamic_dispatch#C++_implement...

2 https://isocpp.org/about

3. https://isocpp.org/wiki/faq/big-picture#why-use-oo

replies(3): >>27644551 #>>27645529 #>>27646404 #
73. ukj ◴[] No.27644614{9}[source]
Yes this is precisely what I am talking about.

Different implementations of eval() a.k.a different programming languages have different semantics.

For which implicit eval() implementation does the above hold?

What type system do you have in mind such that the above holds for ALL expressions? Type-checking itself is not always decidable.

74. yashap ◴[] No.27644651[source]
This matches my personal experience, for sure. I started out writing Python (pre-type-annotations) and JS, with lots of “raw” dicts/objects. For the past ~7 years though, I’ve written mostly Scala, but also a decent amount of TypeScript, Go and Java, and it’s completely transformed how I code, dramatically for the better.

Now, even in the rare case where I write some Python, JS or PHP, I write it in a very statically typed style, immediately parsing input into well-thought-out domain classes. And for backend services, I almost always go with 3 layers of models:

1) Data Transfer Objects. Map directly to the wire format, e.g. JSON or Protobuf. Generally auto-generated from API specs, e.g. using Open API Generator or protoc. A good API spec + code gen handles most input validation well

2) Domain Objects. Hand written, purely internal to the backend service, faithfully represent the domain. The domain layer of my code works exclusively with these. Sometimes there’s a little more validation when transforming a DTO into a domain model

3) Data Access Objects. Basically a representation of DB tables. Generally auto-generated from DB schemas, e.g. using libs like Prisma for TS or SQLBoiler for Go

Can’t imagine going back to the “everything is a dictionary” style for any decent sized project, it becomes such a mess so quickly. This style is a little more work up front, when you first write the code, but WAYYYYYY easier to maintain over time, fewer bugs and easier to modify quickly and confidently, with no nasty coupling of your domain models to either DB or wire format concerns. And code gen for the DTO and DAO layers makes it barely more up-front work.
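
A minimal TypeScript sketch of the DTO-to-domain step described above (types and field names are illustrative, not from any particular codebase):

    // 1) DTO: mirrors the wire format exactly
    type UserDto = { id: string; email: string; created_at: string };

    // 2) Domain object: parsed once, richer types, no wire-format concerns
    class User {
      constructor(
        readonly id: string,
        readonly email: string,
        readonly createdAt: Date,
      ) {}
    }

    function toDomain(dto: UserDto): User {
      if (!dto.email.includes("@")) throw new Error(`invalid email: ${dto.email}`);
      const createdAt = new Date(dto.created_at);
      if (Number.isNaN(createdAt.getTime())) throw new Error(`bad timestamp: ${dto.created_at}`);
      return new User(dto.id, dto.email, createdAt);   // parse, don't validate
    }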

75. ukj ◴[] No.27644679{7}[source]
So, there is nothing dynamic about that dispatch.

Because the implementation details of Foo are actually known at compile time. Which is why you are able to type-check it.

You have literally declared all allowed (but not all possible) implementations of Foo.

What happens when Foo() is a remote/network call?

replies(2): >>27644720 #>>27644901 #
76. detaro ◴[] No.27644720{8}[source]
So you are using a different definition of dynamic dispatch than the rest of the software industry.
replies(1): >>27644755 #
77. ukj ◴[] No.27644755{9}[source]
I am using a conception (NOT a definition) that is actually dynamic.

If you can type-check your dispatcher at compile time then there is nothing dynamic about it.

Decidable (ahead of time) means your function is fully determined. Something that is fully determined is not dynamic.

It is the conception computer scientists use.

78. dkersten ◴[] No.27644803{9}[source]
Just as a Turing machine requires an infinitely sized tape to compute any computable function, so too would Brainfuck require an infinitely sized tape (or whatever the equivalent is called in BF) to compute any computable function. Since memory is finite, neither of these properties are actually available on real hardware.
79. justinpombrio ◴[] No.27644901{8}[source]
That is not what dynamic dispatch means! It is an extremely well established term, with a very clear meaning, and that is not what it means.

I thought you were just mistaken about something, but no, instead you've redefined a well understood term without telling anyone, then aggressively refused to clarify what you meant by it and argued for hours with people, while saying they were all wrong when they used the well established term to mean its well established meaning.

The thing you're talking about is an interesting concept, but it's not called dynamic dispatch, and you will confuse everyone you talk to if you call it that. I don't know if there's a term for it.

replies(2): >>27644974 #>>27648858 #
80. b3morales ◴[] No.27644924{4}[source]
Perhaps the Goldilocks mixture will be languages that allow type annotations but don't require them (e.g. TypeScript, Elixir, Racket, and I think this is how Python's works).
replies(1): >>27646302 #
81. ukj ◴[] No.27644974{9}[source]
“Well established” doesn’t mean anything.

According to who?

Computer scientists talk about “well formed” not “well established”.

Those are categorical definitions.

replies(1): >>27645075 #
82. justinpombrio ◴[] No.27645075{10}[source]
> According to who?

Wikipedia, every textbook you can find, the top dozen search results for "dynamic dispatch", me who has a PhD in computer science plus all the other CS PhD people I know, everyone in my office who knows the term (who are industry people, not academia people), every blog post I have ever read that uses the term, and all the other HN commenters except you. I'm really not exaggerating; a lot of CS terms have disputed meanings but not this one.

EDIT: Sorry all for engaging the troll. I thought there might have been some legitimate confusion. My bad.

replies(2): >>27645217 #>>27647108 #
83. ukj ◴[] No.27645217{11}[source]
So which textbook contains the meaning of "meaning"?

Oh, that's recursive! Which is Computer Science's domain of expertise, not the public domain.

We are talking about formal semantics here. What do programs (and computer languages are themselves programs) mean?

Point 0 of Wadler's law.

https://en.wikipedia.org/wiki/Semantics_(computer_science)

If you can type-check it at compile time then it is NOT dynamic dispatch. It's a contextual confusion.

84. mixmastamyk ◴[] No.27645326{3}[source]
This is the best comment on the subject and should be at the top rather than the current dogmatic ones.
85. ukj ◴[] No.27645393{7}[source]
I am using a perfectly sensible notion of "dynamic" (e.g NOT static) when I am talking about "dynamic dispatch".

Registering pointers to new implementations at runtime (adding entries to the dispatch table); unregistering pointers to old implementations at runtime (removing entries from the dispatch table).

If your dispatch table is immutable (e.g static!), there's nothing dynamic about your dispatch!

86. ukj ◴[] No.27645469{9}[source]
I am interested in computation. Period. Not any particular model of computation (programming language); and not merely computation with functions from N->N.

Quoting from "http://math.andrej.com/2006/03/27/sometimes-all-functions-ar..."

"The lesson is for those “experts” who “know” that all reasonable models of computation are equivalent to Turing machines. This is true if one looks just at functions from N to N. However, at higher types, questions of representation become important, and it does matter which model of computation is used."

87. ukj ◴[] No.27645529{9}[source]
>If you're going to reply, please explain why an organization that Bjarne Stroustrup is a director of believes that C++ supports dynamic dispatch.

Because the meaning of "dynamic" is ambiguous!

Since you are pointing me to wikipedia I'll point you right back...

https://en.wikipedia.org/wiki/Virtual_method_table#Compariso...

"Virtual method tables also only work if dispatching is constrained to a known set of methods, so they can be placed in a simple array built at compile time."

If the vtable is generated at compile time and is constrained to a known set of methods, then that array is immutable! Calling that "dynamic" is an obvious misnomer!

You are neither charitable nor patient. You are committing the bandwagon fallacy as we speak. You and the other 120 (and counting) angry downvoters ;)

I am using the word "dynamic" to actually mean dynamic! I am not going to define it. Use your judgment. Dynamic is NOT static. I am not asking you to "educate me", or to tell me I am right; or wrong. I am asking you to understand the sort of programming language design I have in mind!

Either you understand; or you don't.

88. garethrowlands ◴[] No.27645855{4}[source]
"type-related boiler plate"

That phrase makes me sad. Mainstream languages have a lot of scope for improvement in their type systems.

89. lolinder ◴[] No.27646280[source]
This argument is common, but I've never understood how a dynamically typed language is supposed to avoid coupling algorithms to data structures.

When using a data structure, I know what set of fields I expect it to have. In TypeScript, I can ask the compiler to check that my function's callers always provide data that meets my expectations. In JavaScript, I can check for these expectations at runtime or just let my function have undefined behavior.

Either way, if my function's assumptions about the data's shape don't turn out to be correct, it will break, whether or not I use a dynamic language.

It seems that most of the people who make this argument against static typing are actually arguing against violations of the Robustness Principle[0]: "be conservative in what you send, be liberal in what you accept".

A statically typed function that is as generous as possible should be no more brittle against outside change than an equally-generous dynamically typed function. The main difference is that the statically typed function is explicit about what inputs it has well-defined behavior for.

[0] https://en.wikipedia.org/wiki/Robustness_principle
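
A small TypeScript illustration of that last point (hypothetical names): the statically typed version can be just as liberal in what it accepts, while stating its assumptions explicitly.

    // The function demands only the fields it actually uses.
    function greet(p: { name: string }): string {
      return `hello, ${p.name}`;
    }

    const admin = { name: "Ada", role: "admin" };
    greet(admin);   // fine: structural typing only requires `name`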

replies(1): >>27651575 #
90. wpietri ◴[] No.27646302{5}[source]
Yeah, I've been using Python's gradual typing for a while. It's not perfect, but I'm excited for the possibilities. But the real test is to see what it's like on a large, long-lived project, so I'm keeping an open mind. I figure if that doesn't work fully, it'll still be a nice step toward things that can be pulled out as isolated services.
91. wpietri ◴[] No.27646330{4}[source]
Definitely. And for me the early stages of a product are often about buying information. "Users say they want X, so let's ship X and see." Key to exploring a product space is tight feedback loops between having an idea and seeing what people really do. It's only once I have enough active users (especially active paying users) to justify the project that I have some confidence about what "long term" really means for the code base.
92. LAC-Tech ◴[] No.27646404{9}[source]
> It's pretty clear at this point you don't understand what dynamic dispatch means.

Terms in computing are so overloaded that these days I try[0] to never correct anyone on how they use a term. Instead I ask them to define it, and debate off of that definition.

So instead of downvoting this guy for using different terminology - we can ask him what he means and just have a discussion.

[0] alright I don't always succeed but it's an ideal to strive for

replies(2): >>27646463 #>>27646699 #
93. ukj ◴[] No.27646463{10}[source]
This is a commendable approach.

Computation is a general, abstract and incredibly useful idea disconnected from any particular model of computation (programming language).

Different languages are just different models of computation and have different (desirable, or undesirable) semantic properties independent from their (trivial) syntactic properties.

It's this sort of angry dogmatism which prevents people from talking about programming language design.

Not for a second do they pause to think their own understanding may be limited.

replies(1): >>27646601 #
94. rowanG077 ◴[] No.27646516[source]
I'm one of those people. But the reason I use dynamic languages is in spite of them being dynamic. I choose Python because it has insane library support; I don't like Python as a language though. I would instantly choose Haskell if it had even half the available libraries. But I can't justify having to write everything myself and take 10x as long.
95. Jtsummers ◴[] No.27646601{11}[source]
> It's this sort of angry dogmatism which prevents people from talking about programming language design.

The dogmatism demonstrated today was in your comments, ukj. Your inability to understand that your non-standard use of terms makes it impossible for others to communicate with you in any effective way made this a remarkable farce of a conversation or debate.

replies(3): >>27646642 #>>27647693 #>>27647802 #
96. ukj ◴[] No.27646642{12}[source]
There is no such thing as a "standard" model of computation, and therefore no such thing as "standard definition" or use of computation.

There is only the model you (and your tribe) believe is "standard" implicitly. Do you actually understand that?

I kinda thought navigating the inherent ambiguity of all language (programming or natural) is a fundamental skill for software engineers.

Communication is indeed impossible when you think you possess the "right" meaning of words.

Oh... by the way, somebody capable of understanding/listening has just informed me that what I am actually talking about is closer to multiple dispatch than dynamic dispatch.

Multiple dispatch is, in fact, more dynamic than "dynamic dispatch". Hilarity ensues.

97. Jtsummers ◴[] No.27646699{10}[source]
Dynamic dispatch is not terribly overloaded. It's dispatching based on run-time information instead of just compile-time information.

The problem in this discussion is that ukj has come to the belief (but communicated it poorly) that dynamic dispatch is somehow incompatible with static typing. And for some reason this also matters.

Static typing does not preclude dynamic dispatch, and despite being pointed to several mainstream languages that have both features, ukj decided to ignore reality or the common understanding of the phrase "dynamic dispatch" and produced this grotesque example of trying to communicate with an individual who is, apparently, just a troll. Feeding the troll, ukj, is probably the dumbest thing I did today, but I'll blame that on the insomnia reducing my ability to detect trolls.
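
For anyone following along, a minimal TypeScript sketch of the two features coexisting (invented types, purely illustrative): the method body that runs is chosen from run-time information, yet every call is still checked at compile time.

  interface Greeter {
    greet(name: string): string;
  }

  class Formal implements Greeter {
    greet(name: string) { return `Good day, ${name}.`; }
  }

  class Casual implements Greeter {
    greet(name: string) { return `hey ${name}`; }
  }

  // Which class we get, and therefore which greet() body runs, is only
  // known at run time; the call is still statically typed as Greeter.
  function pick(style: "formal" | "casual"): Greeter {
    return style === "formal" ? new Formal() : new Casual();
  }

  const style = Math.random() < 0.5 ? "formal" : "casual";
  console.log(pick(style).greet("ada"));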

replies(2): >>27646746 #>>27648312 #
98. ukj ◴[] No.27646746{11}[source]
>Dynamic dispatch is not terribly overloaded. It's dispatching based on run-time information instead of just compile-time information.

According to the above definition C++ does not have dynamic dispatch!

The vtable in C++ is generated at compile time and used at runtime. It's immutable at runtime. That means it is NOT dynamic!

https://en.wikipedia.org/wiki/Virtual_method_table#Compariso...

"Virtual method tables also only work if dispatching is constrained to a known set of methods, so they can be placed in a simple array built at compile time."

Like, I don't care how you use words, but you (and everyone) are using "dynamic" to speak about a statically-behaving system!

99. ukj ◴[] No.27647108{11}[source]
There is a legitimate confusion indeed and it seems to be on your part!

What I am talking about when I say "dynamic dispatch" is the sort of dispatching done by R, LISP and Julia (and not by C++ or Java). Now, we can bicker about labels and you can insist that it's actually called "multiple dispatch" and not "dynamic dispatch", but you can't bicker about the semantic fact that "multiple dispatch" is actually more dynamic than "dynamic dispatch".

And this semantic point would've been trivial to unpack if you weren't trying to win an argument.

Indeed, sorry for engaging the trolls. 20 or 30 of you. Lost count.

100. touisteur ◴[] No.27647194{4}[source]
Yes I was thinking of something like that. Only I wish I could also know whether the current element is the first and/or last without calling into the iterated structure. I know this looks like corner-case syntactic sugar but it comes up a lot, e.g. when serializing to JSON. I guess I should write my own iterators but I want them everywhere...
101. touisteur ◴[] No.27647234{4}[source]
I like the cursor, because it abstracts indexes. Problem is, if I want the element, I still have to go fetch it (calling Element on the Cursor), save it in a local variable, and add a declare block for the constant holding the result of Element; then the thing takes 5 lines instead of two and isn't any more readable. Maybe I can just use Element() (especially since AdaCore seems to want to generalize dotted notation) repeatedly and have it inlined, but my past experience says 'expensive'...

And sadly Cursors haven't been generalized to all associative iterable structures in Ada (arrays for example).

102. specialist ◴[] No.27647266[source]
Ya, having 'foreach( null )' be a no-op is my #1 language feature request.

I'm not a big fan of method chaining, so a null-savvy foreach would probably eliminate most of my null checks and my need for Optional.
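
In the meantime it's easy enough to sketch as a user-land helper; a hypothetical TypeScript version might look like this.

  // Treats null/undefined collections as empty instead of throwing.
  function forEachSafe<T>(
    items: readonly T[] | null | undefined,
    fn: (item: T) => void
  ): void {
    if (items == null) return; // the no-op behavior wished for above
    for (const item of items) fn(item);
  }

  forEachSafe(null, x => console.log(x));      // does nothing
  forEachSafe([1, 2, 3], x => console.log(x)); // 1 2 3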

103. ukj ◴[] No.27647623{9}[source]
Hah! No wonder you don't grok my perspective.

If you derive a C++ compiler that accepts file A as valid C++, and I derive a C++ compiler that rejects file A, then from the lens of type theory the two compilers have different type-signatures!

They are not the same formal language. One or both compilers implement a language that is not C++.

104. ◴[] No.27647693{12}[source]
105. ukj ◴[] No.27647802{12}[source]
There's also no such thing as "standard use" of terms. Your dogma has a name. Linguistic prescriptivism ( https://en.wikipedia.org/wiki/Linguistic_prescription )

If there were a "standard use", languages wouldn't evolve and we wouldn't have so many of them. 8 billion people on Earth would be speaking The Standard Language!

Perhaps the thing you don't understand (over and above how to communicate) is this comic?

https://xkcd.com/927/

106. healsjnr1 ◴[] No.27648828[source]
In my own anecdotal experience, I think it comes down to what the product you are working on needs, and how your team works.

Recently, I spent 3 years on Scala then switched jobs and spent 3 years in Ruby.

It was a shock to go back to dynamic languages, but after 3 months, I honestly couldn't tell which felt more productive or led to a more stable, higher quality product.

In Ruby, we had all the issues people point out about dynamic languages, but the product didn't lean heavily on complex data structures or algorithms. We embraced complexity and failure and built good processes, designs and practices to deal with this.

In Scala, we had more rigour, but I also know I spent a lot of time on type design. Once things were sorted there was a lot of confidence in it, but generally, it took a lot longer to get there.

For certain systems that is absolutely worth it; for others (and in my case) it did feel like the evolution of the product meant this effort never really paid off.

107. ukj ◴[] No.27648858{9}[source]
It turns out "dynamic dispatch" is not as well established or as clear as you insist. It means at least one of five things:

0. Dynamic dispatch (as you are using it)
1. Double dispatch
2. Multiple dispatch
3. Predicate dispatch
4. All of the above collectively.

What you thought was a confusing re-definition on my part might have been an ignorance of alternative uses on yours ;)

Java doesn't do 1, 2 or 3.
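
To make the distinction concrete, here's a toy TypeScript sketch (invented shapes, placeholder logic): with single dispatch the body that runs depends on the run-time type of one receiver; with multiple dispatch it depends on the run-time types of several arguments, which TypeScript, like Java, can only simulate with an explicit table.

  // Single dispatch: area() is chosen by the run-time class of `this`.
  abstract class Shape {
    abstract kind: "circle" | "square";
    abstract area(): number;
  }
  class Circle extends Shape {
    kind = "circle" as const;
    constructor(public r: number) { super(); }
    area() { return Math.PI * this.r ** 2; }
  }
  class Square extends Shape {
    kind = "square" as const;
    constructor(public s: number) { super(); }
    area() { return this.s ** 2; }
  }

  // Simulated multiple dispatch: the implementation is chosen by the
  // run-time kinds of BOTH arguments (toy bodies, illustration only).
  const overlapImpls: Record<string, (a: Shape, b: Shape) => boolean> = {
    "circle,circle": () => true,
    "circle,square": () => false,
    "square,circle": () => false,
    "square,square": () => true,
  };
  function overlaps(a: Shape, b: Shape): boolean {
    return overlapImpls[`${a.kind},${b.kind}`](a, b);
  }

  console.log(overlaps(new Circle(1), new Square(2))); // false (toy logic)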

108. yawaramin ◴[] No.27648955{3}[source]
> But in practice, they're just a bad match for projects where the defining characteristic is unstable ground. They force artificial clarity when the reality is murky.

The rebuttal to this is provided by another post from Alexis King: https://lexi-lambda.github.io/blog/2020/01/19/no-dynamic-typ...

> This story sounds compelling, but it isn’t true. The flaw is in the premise: static types are not about “classifying the world” or pinning down the structure of every value in a system. The reality is that static type systems allow specifying exactly how much a component needs to know about the structure of its inputs, and conversely, how much it doesn’t.

Static typing helps you to safely model the parts of the system that you actually do know about, while allowing you to leave the unknown parts loosely defined. And when business requirements change, the compiler helps you to make those changes in a safe, guided way. This is why people often report the experience of doing large refactorings driven by compiler error messages, then running the program and finding that the change works correctly on the first try, a confidence few report in dynamically typed languages.
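
A small TypeScript illustration of "leave the unknown parts loosely defined" (a hypothetical event type, not from the article): the envelope you do understand is typed precisely, while the part you don't understand yet stays opaque and is only inspected where it's actually needed.

  // The known envelope is typed; the payload is deliberately left opaque.
  interface IncomingEvent {
    id: string;
    receivedAt: Date;
    payload: unknown; // we don't pretend to know its shape yet
  }

  // Only the code that needs the payload looks inside it, locally.
  function extractEmail(evt: IncomingEvent): string | null {
    const p = evt.payload;
    if (typeof p === "object" && p !== null && "email" in p) {
      const email = (p as { email?: unknown }).email;
      if (typeof email === "string") return email;
    }
    return null;
  }

When a requirement later pins down the payload's shape, the "unknown" can be replaced by a real type and the compiler points at every place that has to be updated.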

> After doing a couple dozen, I'm seeing a pattern: developers using static tooling (e.g., Java, TypeScript) get circa half as much done as people using dynamic tooling (Python, plain JS).

I hope you understand and accept the fact that interview code challenges are not very representative of real-world software engineering. On-the-spot interview code is throwaway; real production code is not.

replies(1): >>27651904 #
109. tikhonj ◴[] No.27649355{4}[source]
Seems like static typing is neither necessary nor sufficient to cause the particular problems you mentioned.

Static types can absolutely help with more than local reasoning. One of the main reasons I like static types is that they give me a place in the code to reflect aspects of the architecture that otherwise go implicit. Databases are a great example: in both static and dynamic languages, the behavior of your code is going to depend on the database schemas you're using, and static types give you a mechanism to declare and manage this dependency explicitly. In a general sense, the key in architecture is how a system is split up into parts and what interfaces the parts have, and a static type system is a way to make those interfaces first-class citizens of a language subject to abstraction and automation.

Code with static types might be larger than the problem it's trying to solve, but so might code with dynamic types—that's more a function of the sort of abstractions and complexity that go into the code. I've written some code to do very similar things in Haskell and Python, and my experience is that the Haskell code tends to be noticeably more compact than the Python code, even though I make liberal use of types and write out all the top-level type signatures. While some of this comes down to other differences between the languages, part of Haskell's expressiveness absolutely comes down to static types—static types make it easier to write very general code against high-level abstractions (eg monads) that would need to be narrower in dynamic languages (eg promises).

And sure, you can couple code and data in static languages, but at least the types will help you when it comes time to change. I've worked with dynamically typed programs where functions rely on some mixture of nested tuples, hashmaps, lists, etc., and it's hard to understand exactly how concepts are represented in code, much less change that representation. If you represent some central notion in your code as some nested primitive type (a hashmap of tuples of ...) and you want to change that representation, you'll still have to update the places it's used in the code, but without a static type system you won't get a list of those places, and you won't have assurance that you updated everywhere that mattered. I'm not sure I'm explaining this pattern well, but I've worked in JavaScript codebases where making breaking changes to core data components was basically impossible.

Point being, all the problems you mention come up in both static an dynamic languages. There might be a case that they're more common in one or the other, but it's not obvious either way, and it's going to depend a lot on which specific static type system you're thinking of.
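
A sketch of that last pattern in TypeScript (hypothetical names): once the central representation is a named type instead of a nest of primitives, changing it turns every out-of-date use site into a compile error instead of a runtime surprise.

  // Before: the core data is a nest of primitives, a map from user id to a
  // list of (tag, score) tuples. Nothing in the code names the concept.
  type ScoresV1 = Map<string, [string, number][]>;

  // After: the concept gets a name and a shape of its own.
  interface TagScore {
    tag: string;
    score: number;
  }
  interface UserScores {
    userId: string;
    scores: TagScore[];
  }

  // If the representation later changes (say, score becomes a richer object),
  // the compiler lists every call site that still assumes the old shape.
  function best(u: UserScores): TagScore | undefined {
    return [...u.scores].sort((a, b) => b.score - a.score)[0];
  }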

110. ukj ◴[] No.27650975{3}[source]
In English there seems to be the eternal confusion between what things are and what we call them.

When a router does lookups from a static table it's "static routing".

When Java does lookups from a static table it's "dynamic dispatch".

The same type of computation is being characterised as both "static" and "dynamic".

When a router does lookups from a dynamic table it's "dynamic routing" - there is no equivalent in Java because making the dispatch table reflexive/mutable is precisely what violates type-safety!

replies(1): >>27654120 #
111. ukj ◴[] No.27651184{5}[source]
What C++, Haskell, Rust, Go etc. call "dynamic dispatch" is what a router calls "static routing". The defining characteristic is that the lookup table is pre-determined and immutable at runtime.

What routers call "dynamic routing" is having the lookup table mutable at runtime.

There can be no equivalent to that in type-safe languages because when you mutate the dispatch table you lose type-safety.

replies(1): >>27652121 #
112. throwaway346434 ◴[] No.27651575{3}[source]
If you are doing things with what are basically strings, as a lot of user input from web forms is, the advantage of having many different strict string types isn't huge. In these scenarios, using just basic types gets you a long way, because there is often very little business logic: route here, render that template, set these attributes.

Even with a lot of JSON or XML parsing, you throw it into a parser and take what you need; if an unrelated field isn't what you expected, just move on rather than stop everything because the library author forgot about extension or openness possibilities (i.e. an xs:any in a schema).

This attitude comes not from the assumption that types are unhelpful, but from the view that the chance we've modelled every outcome into our view of the world, and gotten it right, is slim.

When you get to complex systems with state changes to data and strict, well-defined policies, rules engines, etc.? That's where dynamic languages often start adding all of that validation layer, asserting that right here the code should act more like a type system and that it matters: there are probably financial, security or other risks.

113. wpietri ◴[] No.27651904{4}[source]
In this case, the interview problem was a simplified version of a thing the team gets asked to do. So I believe it was as representative as possible for an hour's coding.

I disagree that "real production code" is not throwaway. In situations of high requirements volatility, a great deal of production code ends up getting thrown away, because either a) it's intended to be a short-lived way to learn about the domain, or b) it's built on an assumption that turns out to be incorrect. And until the project hits economic sustainability, a lot of assumptions necessarily get made. It's only once the project is economically sustainable that we have some confidence that the code will be around for the long haul.

If your point is that most people are using statically typed languages wrong for high-volatility situations, I'll take your word for it. There were definitely times Typescript users seemed to be unnecessarily constraining things, for example. But I'm not sold that the more elaborate approach of being thoughtfully and explicitly untyped for a project's early stages for the sake of prototyping in a statically typed language would yield much practical gain.

replies(1): >>27657484 #
114. kortex ◴[] No.27652121{6}[source]
> What routers call "dynamic routing" is having the lookup table mutable at runtime.

> There can be no equivalent to that in type-safe languages because when you mutate the dispatch table you lose type-safety.

That's exactly what I mean by escape hatches. Rust has unsafe. You can write a WTF FastInverseSqrt in Rust. You shouldn't 99% of the time. But you can.

Rust also has Box, Rc, Arc, and other tools. I'm not fluent enough in Rust to know how to do it offhand, but I'm quite confident, and will eat my hat if you can't accomplish what is effectively a mutable vtable dispatch in Rust.

But also...why? Yeeting a function call into the void without any type knowledge seems way more bug-prone than interface/trait-based dispatch, to what benefit? Save developer time? I've spent countless dev-days of my life I won't get back debugging exactly this kind of dynamic-dispatched, json-dsl, frankly loosey-goosey untyped bullshit. There's so many better ways.
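
To be clear, I'm thinking of Rust (where the same shape would be something like a HashMap of boxed closures), but the general point is easy to sketch in TypeScript too, with hypothetical handler names: the dispatch table is freely mutated at run time, yet every handler that goes into it is still checked against the declared signature.

  type Handler = (payload: string) => string;

  // A dispatch table that is mutable at run time...
  const handlers = new Map<string, Handler>();

  function register(name: string, h: Handler): void {
    handlers.set(name, h); // entries can be added, replaced or removed later
  }

  function dispatch(name: string, payload: string): string {
    const h = handlers.get(name);
    if (h === undefined) throw new Error(`no handler for ${name}`);
    return h(payload);
  }

  // ...while every entry is still statically checked against Handler.
  register("upper", s => s.toUpperCase());
  register("reverse", s => [...s].reverse().join(""));
  // register("bad", (n: number) => n + 1); // rejected at compile time

  console.log(dispatch("reverse", "parse, don't validate"));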

replies(1): >>27654964 #
115. kortex ◴[] No.27652144{7}[source]
> "All the time" is an english expression, please, look it up before going all-caps Python name mangle convention on me.

I know reaction comments are discouraged on HN, but this had me in stitches. Top-tier gourmet dig right there.

116. wabain ◴[] No.27654120{4}[source]
I don't think this analogy quite holds together. A router doing lookups from a table is implementing a static routing strategy from a control plane perspective; it's using statically configured values instead of using dynamic information about the network topology gleaned using a routing protocol like BGP. But an implementation of that strategy in terms of table lookups is dynamic—it's walking a data structure to retrieve values which were specified in the runtime configuration, not at compile time.

The reason that "dynamic dispatch" in Java, etc. is called that is that the instance of the dispatch table to use is chosen dynamically, rather than being fixed at the callsite. It's true that Java doesn't let the shape of the dispatch table change at runtime, but that's not what dynamic vs static refers to conventionally in this context. The ability to dynamically add and remove methods from a class is something which you typically only get in dynamically typed languages but dynamic dispatch and dynamic typing are not the same thing.

In particular, while a full-featured routing information base implementation will usually use some form of dynamic dispatch to customize the behavior of routes originated through different protocols, it's very uncommon for an implementation to rely on dynamic typing which adds or mutates the methods associated with different entities. That's simply a different kind of tool used for different purposes. It's something which can be helpful in object-relational mapping, for instance, because you can create methods based on a dynamic database schema. The RIB is not going to have a schema like that which changes at runtime.

replies(1): >>27654493 #
117. ukj ◴[] No.27654493{5}[source]
>But an implementation of that strategy in terms of table lookups is dynamic—it's walking a data structure to retrieve values which were specified in the runtime configuration, not at compile time.

That's precisely the point. You can specify part of the routing table at compile/configure time - the rest gets generated at runtime.

The data/control plane distinction is conceptual. It doesn't hold in memory when the router is handling its own network traffic - it has a single routing table/world-view.

My own routing table is shared by the data plane AND control plane.

At some point you will receive external data (a routing update) which requires runtime validation; you will do reflection and update your own routing table (ring 0 address space) based on external events.

>The RIB is not going to have a schema like that which changes at runtime.

The schema need not change. The entries/relations between objects changing is sufficient to violate type-safety.

Route add( *str1, *str2) to Number.add().

replies(1): >>27657552 #
118. dang ◴[] No.27656856{8}[source]
Would you please stop? Regardless of how right you are or feel you are, we don't want long, tedious flamewars like this and you did more than anyone else to prolong it. Dozens of comments is beyond excessive.

https://news.ycombinator.com/newsguidelines.html

119. yawaramin ◴[] No.27657484{5}[source]
> In this case, the interview problem was a simplified version of a thing the team gets asked to do. So I believe it was as representative as possible for an hour's coding.

That's exactly my point–code written under pressure during an hour-long code challenge will never be representative of code written during normal day-to-day coding. Even if the problem domain is similar to a real-world problem.

> I disagree that "real production code" is not throwaway. In situations of high requirements volatility, a great deal of production code ends up getting thrown away

But it's not written to be thrown away an hour later. That's not really the kind of 'production code' that any reasonable person would be talking about in this thread. If you're talking about exploratory data analysis, data science or whatever, sure, you can consider it 'production code', but it's not the product of the craft of software engineering, it's more about running quick experiments.

> I'm not sold that the more elaborate approach of being thoughtfully and explicitly untyped for a project's early stages for the sake of prototyping in a statically typed language would yield much practical gain.

There are many different kinds of projects. And people who work with good statically typed languages can prototype quite well using types to drive their domain modelling. It's a different way of working. You should read some of what Scott Wlaschin has written to get an idea.

replies(1): >>27658262 #
120. wabain ◴[] No.27657552{6}[source]
I mean, there's ample evidence on this thread to suggest we're not going to reach a productive conclusion here but I guess I'll keep biting.

It's not clear to me if you are suggesting that a dynamic routing table with different kinds of routes cannot be implemented in a type-safe manner in a statically typed language, or if you're working with an analogy where a routing table is like a dynamic programming language at runtime, in that a static set of entities and relations are known ahead of time and those are modified by runtime input. If it's the latter I'm not really sure how the analogy works—if programming languages are to routers as types are to routing entries, what in a router is analogous to a value of a given type?

I can speak more to the former possibility; here's a rough sketch of how one could implement a routing table using the tools available in a statically typed environment (and in a type-safe way). One way to do it (I believe the common way, and certainly the only one I've seen in commercial router implementations) is to treat statically populated and dynamically learned routes more or less uniformly in the data structures used to perform data-plane lookups. Each such route entry has the same fields and gets inserted into a data structure with a predefined shape. Where special behavior is needed for routes of different kinds, that behavior can be implemented by using dynamic dispatch in the sense it's usually used in C++, Java, Rust, etc. to call a method associated with a route entry, or using other techniques common to statically compiled languages—there is a fixed set of such operations defined up front. Adding and removing entries from the routing table at runtime does not typically implicate type safety because the types used to describe the table describe all of its possible valid states. For instance, the type for a node in a radix trie might describe how it can either be leaf node or contain subnodes, etc.

> The schema need not change. The entries/relations between objects changing is sufficient to violate type-safety.
>
> Route add( str1, str2) to Number.add().

It's obviously not always true that entries or relations changing will violate type safety; any non-trivial system will let you perform some kinds of data manipulation at runtime. Conventional static type systems will allow some kinds of mutations (like changing around pointers in a radix trie to insert a new node) but will not have the flexibility to support some others (like changing the shape of a dispatch table at runtime).

One kind of call pattern which is incompatible with statically compiled dynamic dispatch is where the types of parameters change along with the base type which owns the dispatch table; I think this is what your add() example is getting at—you need the type of the second parameter to match the first, which you can't validate without runtime checks if you don't know what concrete implementations will be in use at runtime. In the case of a routing table I don't think this kind of polymorphism is needed though; I can't think of an instance where an operation would fundamentally require a fixed relation in the concrete types of different routes. For instance, when routes overlap you can derive a priority value for each one to decide which one to use, rather than directly implementing some kind of function whichIsBetter(a, b) which relies on knowing what concrete route kinds a and b are.
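
To make the sketch concrete, here is a toy TypeScript version (invented route kinds, naive string prefix matching, nothing like a real RIB): routes of different kinds share one entry interface, per-kind behavior goes through ordinary dynamic dispatch, entries are inserted at run time without touching type safety, and selection uses a derived preference value rather than comparing concrete kinds pairwise.

  interface RouteEntry {
    prefix: string;       // e.g. "10.1.", naive prefix match for illustration only
    nextHop: string;
    preference(): number; // per-kind behavior via ordinary dynamic dispatch
  }

  class StaticRoute implements RouteEntry {
    constructor(public prefix: string, public nextHop: string) {}
    preference() { return 1; } // statically configured routes are preferred
  }

  class LearnedRoute implements RouteEntry {
    constructor(public prefix: string, public nextHop: string, private metric: number) {}
    preference() { return 100 + this.metric; } // learned routes ranked by metric
  }

  class RoutingTable {
    private entries: RouteEntry[] = [];

    // Adding entries at run time never implicates type safety: the types
    // describe every valid state of the table.
    add(e: RouteEntry): void { this.entries.push(e); }

    lookup(dest: string): RouteEntry | undefined {
      return this.entries
        .filter(e => dest.startsWith(e.prefix))
        .sort((a, b) => a.preference() - b.preference())[0];
    }
  }

  const rib = new RoutingTable();
  rib.add(new StaticRoute("10.1.", "eth0"));
  rib.add(new LearnedRoute("10.", "eth1", 20));
  console.log(rib.lookup("10.1.2.3")?.nextHop); // "eth0"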

replies(5): >>27658074 #>>27658294 #>>27658798 #>>27659025 #>>27660074 #
121. dragonwriter ◴[] No.27657615[source]
> It's weird how long it's taking for people to rediscover why strong static types were a good idea.

It's weird how long it's taken for languages with static typing and type systems designed for correctness (and designed well for that end), rather than principally for convenience of compilation, to become available in generally usable form (considering licensing model, features, ecosystem, etc.).

122. ukj ◴[] No.27658074{7}[source]
The conclusion was my very first post really. Understand the limits of your tools; and your own limits - then make acceptable trade offs.

Programming is about explaining desires to computers. It is a complex, error-prone activity and we depend on our automated tooling to save us from ourselves.

Obviously, in theory you can build the exact same software in Assembly as you can with Java. But not in practice because the programming language is a human-computer interface and a symbiosis with useful feedback loops emerges at higher levels of abstraction.

The design choices of your language impose certain limits/discipline on your expressive power.

Some design patterns become easy to express, while others become difficult to express and your paradigm's features become obstacles.

At which point it is your choice to switch off the guard rails.

Like the chap somewhere down below telling me he will use Rust “unsafe” so he can have direct control of his memory. Yes! You can! You can also read/write to /dev/mem directly!

Translated into English: "I will turn off the compiler's safety checks because they are getting in my way". There are many ways to shoot yourself in the foot with this level of power.

He is agreeing with me, but this is HN and opposition must be maintained. Dung must be flung. For reasons.

Sure, for certain kinds of protocols this kind of polymorphism is not always needed. But I am merely pointing out the threshold where the type-safe paradigm begins to falter. The corner cases where the pros don’t outweigh the cons. Where the “compile time” and “runtime” distinction becomes a hindrance rather than a useful separation of concerns.

You can go further. Software Defined Networks. Application router.

In general, contexts in which it would be useful to know the shape of your data at runtime!

As a spontaneous aside… if your entire information-processing system is well-typed (top to bottom) - congratulations. You are well on your way to understanding Category Theory.

The entire data-schema of your stack is what mathematicians call a (small) "Category".

123. wpietri ◴[] No.27658262{6}[source]
> but it's not the product of the craft of software engineering

Sure? This seems pretty no-true-Scotsman to me. If you want to define some of the things I need to do as not real software, feel free. But they're still things my team has to get done. To me there's a spectrum of domain stability on which the code rests. It's frequently not possible to know in advance which bits will be long lived and which won't be. So whether or not one of those bits is the true "craft of software engineering" in somebody's eyes has very little interest to me. Especially when it's an anonymous rando.

> people who work with good statically typed languages can prototype quite well using types to drive their domain modelling.

I can certainly believe that's possible. Hopefully some of them will interview with me so I can see how they work. I'm just reporting the data I have.

> that's not really the kind of 'production code' that any reasonable person

In startup contexts I will quite often try an experiment that is quick to build. A few hours is not at all uncommon. That you consider me unreasonable is a sign I can better spend my time elsewhere.

124. ukj ◴[] No.27658294{7}[source]
> I can't think of an instance where an operation would fundamentally require a fixed relation in the concrete types of different routes. For instance, when routes overlap you can derive a priority value for each one to decide which one to use, rather than directly implementing some kind of function whichIsBetter(a, b) which relies on knowing what concrete route kinds a and b are.

Any routing logic which makes routing decisions based on information nested arbitrarily deep into your packets.

Is this an IP packet containing data; or an IP packet containing an HTTP GET request to www.google.com?

You have to do arbitrarily deep type inference in real time.

The data has no semantic content until you infer some! It's just a bitstream!

125. ukj ◴[] No.27658798{7}[source]
>If it's the latter I'm not really sure how the analogy works—if programming languages are to routers as types are to routing entries, what in a router is analogous to a value of a given type?

It's not an analogy. Programming languages and routers are particular instances of computable functions. "Dispatching" and "routing" are just another example of us using different English words to describe the exact same computation: an M:N mapping function.

Whether the input is mapped to an IP address or a memory address is a boring implementation detail.

Nothing in a router is analogous to a value of a type because there is no such thing as "types" at runtime unless you infer them! Types exist only at compile time. Types are semantic annotations of data. You are helping the compiler help you by telling it what you know about the data you are handling.

This blob encodes a Number. That blob encodes a String.

If you don't want any help from your compiler, you don't have to tell it the types of anything - just manipulate the data directly!

That's precisely what assembly languages do. Everything is untyped.

126. ukj ◴[] No.27659025{7}[source]
>Where special behavior is needed for routes of different kinds, that behavior can be implemented by using dynamic dispatch in the sense it's usually used.

How? A routing function is precisely an M:N mapping with untyped input.

It's just some data somebody sent you! Route it.

Unless you can infer more about the meaning of those bits the only strategies possible are static(M), round-robin(M) or random(M).