
628 points kiyanwang | 4 comments
gwbas1c No.43631365
> Read the Reference

> Don’t Guess

I find that, when working with a new "thing," I often like to guess for about an hour or so before I really do a deep dive into the reference. Or, I'll read a stackoverflow answer or two, play around with it, and then go to the reference.

Why?

Often there's a lot of context in the reference that only makes sense once I've had some hands-on time with whatever the reference is describing.

This is especially the case when learning a new language or API: I'll go through a tutorial / quickstart; "guess" at making a change; and then go back and read the reference with a better understanding of the context.

BTW: This is why I like languages and IDEs that support things like intellisense. It's great to be able to see little bits of documentation show up in my IDE to help me in my "guess" stage of learning.
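
For instance, a doc comment like the one below (a made-up Go sketch, not from any real library) is exactly the little bit of documentation that gopls / IntelliSense surfaces on hover or completion during that "guess" stage:

    package clamp // hypothetical package, purely for illustration

    // Clamp returns v limited to the inclusive range [lo, hi].
    // If lo > hi, the bounds are swapped rather than panicking.
    // This comment is what the editor pops up when you hover or autocomplete Clamp.
    func Clamp(v, lo, hi int) int {
        if lo > hi {
            lo, hi = hi, lo
        }
        if v < lo {
            return lo
        }
        if v > hi {
            return hi
        }
        return v
    }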

palmotea No.43631448
>> Read the Reference

>> Don’t Guess

> I find that, when working with a new "thing," I often like to guess for about an hour or so before I really do a deep dive into the reference. Or, I'll read a stackoverflow answer or two, play around with it, and then go to the reference.

I think that's fair. I've definitely seen "not best" programmers only guess and only read stackoverflow, over and over, forever, and never read the reference. They have no idea what's going on and just spin, making a mess, until something sticks. I kinda read that item as a response to people like that.

9rx No.43631964
The trouble is that there is a strong correlation between being able to design good interfaces and being able to prepare good documentation. Meaning, where guessing fails, the reference is bound to also fail due to inaccuracies or failing to communicate what you need to know. Which stands to reason as both require concern for how the user perceives the product. That is a skill in its own right.

In practice, so many times I've spun my wheels thinking I just didn't understand the reference, only to find out that there was a bug or change that invalidated it. Nowadays, if I must interface with a product built by someone who doesn't understand the user, I'll go straight to the source code if guessing fails, or resort to probing the system if the code isn't available. Not only is it faster, but you'll gain a better understanding of what is going on than any stumbling attempt to describe it in natural language could ever communicate.

palmotea No.43632126
> The trouble is that there is a strong correlation between being able to design good interfaces and being able to prepare good documentation. Meaning, where guessing fails, the reference is bound to also fail due to inaccuracies or failing to communicate what you need to know. Which stands to reason as both require concern for how the user perceives the product.

I think we're talking about different kinds of guessing. I'm not talking about skilled, educated guessing, I'm talking about dumb, ignorant guessing. Like "I don't know anything, so I'm just going to try 'stuff' I find online without really understanding." Those people do that even with the most beautiful interfaces and the best documentation.

But even with the best designed interfaces, not everything is discoverable (e.g. another fantastically designed but orthogonal interface in the same library that solves your problem).

9rx No.43632277
> Like "I don't know anything, so I'm just going to try "stuff" I find online without really understanding.

A reasonable place to start. But fair that you can't stop there if it isn't working. Next step, in my opinion, is to look at the interface more closely to see if it provides any hints. It will most of the time if it is well designed.

> But even with the best designed interfaces, not everything is discoverable

Sure. That's what the test suite is for, though: to document full intent and usage for users. You're still not going to go to a reference for that. As an added bonus, it is self-validating, so none of the "is it me or is the reference incorrect?" rigamarole.
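
Rough Go sketch of what I mean (strings.TrimSpace stands in for whatever API you're actually learning; the package and test names are invented):

    package docs

    import (
        "strings"
        "testing"
    )

    // The test doubles as usage documentation: it spells out the guarantees a
    // caller can rely on, and `go test` keeps those claims honest, so the
    // "docs" can't silently rot the way prose can.
    func TestTrimSpaceDocumentsUsage(t *testing.T) {
        cases := []struct{ in, want string }{
            {"  padded  ", "padded"},                         // leading/trailing spaces go
            {"\t\ntabs and newlines\n", "tabs and newlines"}, // all whitespace, not just ' '
            {"already clean", "already clean"},               // clean input is untouched
            {"", ""},                                         // empty stays empty
        }
        for _, c := range cases {
            if got := strings.TrimSpace(c.in); got != c.want {
                t.Errorf("TrimSpace(%q) = %q, want %q", c.in, got, c.want)
            }
        }
    }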

1. palmotea No.43633054
>> Like "I don't know anything, so I'm just going to try "stuff" I find online without really understanding.

> A reasonable place to start. But fair that you can't stop there if it isn't working.

It's not a reasonable place to start. You're basically talking about copy-paste coding. Google search, stack overflow, paste in the first answer. Afterwards, ask the dev if they know what they did and why it works, and they won't be able to answer because they don't know.

> Next step, in my opinion, is to look at the interface more closely to see if it provides any hints. It will most of the time if it is well designed.

The people I'm talking about can't and won't do that.

> Sure. That's what the test suite is for, though: to document full intent and usage for users. You're still not going to go to a reference for that. As an added bonus, it is self-validating, so none of the "is it me or is the reference incorrect?" rigamarole.

I'm getting an "I don't need comments because code is self-documenting" vibe here. I disagree with that. Prose is a better way to express many, many things related to code than the code itself or even its tests.

Sure, the code is the most authoritative place to find what was implemented, but it's not the best way to find the why or the concepts and thought behind it.
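
A trivial, invented example of what I mean. The code states the what; only the comment carries the why:

    package retry // hypothetical

    import "time"

    // baseDelay is 250ms because (in this made-up scenario) the upstream
    // gateway rate-limits bursts shorter than ~200ms, so going lower just
    // trades retries for 429s. Nothing in the constant itself can tell you that.
    const baseDelay = 250 * time.Millisecond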

2. 9rx No.43633193
> It's not a reasonable place to start.

Why not? If it works it works. Not everyone is concerned with receiving the award for best programmer.

> they won't be able to answer because they don't know.

I do understand that you are thinking of a specific person here, but broadly, you will more or less know how it works because you'll already know how you would implement it yourself if you had to. But since someone else's code already did it, there's no need to think about it further. This remains a reasonable place to start.

> but not the why

If you are not capturing "why" in your tests, what are you testing, exactly? The "what" is already captured in the implementation. You don't need that written down twice. Worse, if you do end up testing "what", you are bound to have to deal with broken tests every time you have to make a change. That is a horrid situation to find yourself in.
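
To make that concrete, here's an invented Go sketch (normalize is a stand-in for whatever is actually under test):

    package normalize // hypothetical

    import (
        "strings"
        "testing"
    )

    // normalize is the invented function under test.
    func normalize(s string) string {
        return strings.ToLower(strings.TrimSpace(s))
    }

    // A "why" test pins the contract callers care about and survives any
    // refactor that keeps that contract intact.
    func TestPastedWhitespaceNeverChangesAnID(t *testing.T) {
        // Why: IDs are routinely pasted from spreadsheets with stray
        // whitespace, and that must never cause a mismatch.
        if normalize("  ABC-123\n") != normalize("ABC-123") {
            t.Fatal("pasted whitespace changed the normalized ID")
        }
    }

    // A "what" test just restates the current implementation's steps; it
    // documents nothing the code doesn't already say and churns along with it.
    func TestNormalizeTrimsThenLowercases(t *testing.T) {
        if normalize(" ABC ") != strings.ToLower(strings.TrimSpace(" ABC ")) {
            t.Fatal("test no longer matches the implementation's steps")
        }
    }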

I do agree that writing useful tests is really hard, at least as hard as writing good reference material, and thus beyond the skill of most. But if you have to work with something built by the unskilled, all bets are off no matter which way you look.

3. palmotea No.43637381
>> It's not a reasonable place to start.

> Why not? If it works it works. Not everyone is concerned with receiving the award for best programmer.

Ok, that clarifies things: programmers who avoid reading the docs in favor of guessing, or who follow the "Google search, stack overflow, paste in the first answer" cycle, are mediocre programmers. If they don't want to be good programmers (which is what the article is talking about), they can keep doing what they're doing.

> If you are not capturing "why" in your tests, what are you testing, exactly?

You can't capture why in code. Your tests are a demonstration of the "what."

4. 9rx No.43637778
> ...are mediocre programmers.

That depends on the beholder.

- A programmer who applies a laundry list of their own habits to decide who makes for a "best" programmer, and who doesn't guess themselves, is likely to exclude anyone who does.

- A business person is apt to consider someone who successfully delivers a product quickly by using someone else's code among the "best".

> You can't capture why in code.

Then you can't capture it in natural language either, making this whole thing moot. But I disagree with that idea.

> Your tests are a demonstration of the "what."

You have a point that some testing frameworks carve out a special declaration for "example" tests that are marked for inclusion in generated API docs. There might be a time and place for that kind of documentation, but that isn't the kind of testing I was thinking of. That isn't representative of the vast majority of the tests you will write. If it is, you're doing something wrong – or at the very least you aren't being fair to those who will consume your tests later.
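
Go's testable examples are the clearest version of that carve-out I know of. Invented sketch (Greet is made up, and would normally live in the package proper rather than the test file):

    package greet // hypothetical; this file would be greet_test.go

    import "fmt"

    // Greet is the invented function being documented. It's inlined here only
    // so the sketch is self-contained.
    func Greet(name string) string { return "hello, " + name }

    // ExampleGreet is an "example test": `go test` runs it and checks the
    // // Output: comment, and godoc / pkg.go.dev render the body alongside
    // Greet's API documentation.
    func ExampleGreet() {
        fmt.Println(Greet("docs"))
        // Output: hello, docs
    }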

In my laundry list, concern for the next guy is what separates the "best" programmers from the mediocre. But I understand why your laundry list differs.