Delete tests

(andre.arko.net)
125 points by mooreds | 29 comments
recursivedoubts ◴[] No.45071410[source]
One of the most important things you can do is move your tests up the abstraction layers and away from unit tests. For lack of a better term, to move to integration tests. End-to-end tests are often too far from the system to easily understand what's wrong when they break, and can overwhelm a development org. Integration tests (or whatever you want to call them) are often the sweet spot: not tied to a particular implementation, able to survive fairly significant system changes, but also easy enough to debug when they break.

https://grugbrain.dev/#grug-on-testing

replies(11): >>45071535 #>>45071726 #>>45071751 #>>45071944 #>>45072117 #>>45072123 #>>45072158 #>>45072321 #>>45072494 #>>45074365 #>>45080184 #
RHSeeger ◴[] No.45071726[source]
Integration tests and Unit tests are different tools; and each has their place and purpose. Using one "instead" of the other is a mistake.
replies(8): >>45072079 #>>45072176 #>>45072722 #>>45072873 #>>45073135 #>>45074394 #>>45080460 #>>45093392 #
1. simianwords ◴[] No.45072176[source]
Wow I hate this dogmatism. It is indeed better to use one instead of the other. Let’s stop pretending all are equally good and we need every type of test.

Sometimes you just don’t need unit tests and it’s okay to admit it and work accordingly.

replies(3): >>45072205 #>>45072404 #>>45072431 #
2. RHSeeger ◴[] No.45072205[source]
And sometimes you only need screws, instead of nails; or vice versa. But that doesn't invalidate the tool; it just means your use case doesn't need it.
replies(1): >>45072446 #
3. imiric ◴[] No.45072404[source]
You claim it's dogmatism, yet do the same thing in reverse. (:

Unit and integration tests test different layers of the system, and one isn't inherently better or more useful than the other. They complement each other to cover behavior that is impossible to test otherwise. You can't test low-level functionality in integration tests, just as you can't test high-level functionality in unit tests.

There's nothing dogmatic about that statement. If you disagree with it, that's your prerogative, but it's also my opinion that it is a mistake. It is a harmful mentality that makes code bases risky to change, and regressions more likely. So feel free to adopt it in your personal projects if you wish, but don't be surprised if you get push back on it when working in a team. Unless your teammates think the same, in which case, good luck to you all.

replies(1): >>45072649 #
4. CuriouslyC ◴[] No.45072431[source]
If you don't write unit tests, how do you know something works? Just manual QA? How long does that take you relative to unit tests? How do you know if something broke due to an indirect change? Just more manual QA? Do you really think this is saving you time?
replies(3): >>45072610 #>>45072748 #>>45073161 #
5. imiric ◴[] No.45072446[source]
You can't build a house without nails and screws, though.

Sure, if you're only writing a small script, you might not need tests at all. But as soon as that program evolves into a system that interacts with other systems, you need to test each component in isolation, as well as how it interacts with other systems.

So this idea that unit tests are not useful is coming from a place of laziness. Some developers see it as a chore that slows them down, instead of seeing it as insurance that makes their life easier in the long run, while also ensuring the system works as intended at all layers.

6. tsimionescu ◴[] No.45072610[source]
You can write many other kinds of automated tests. Unit tests are rarely worth it, since they only look at the code in isolation, and often miss the forest for the trees if they're the only kind of test you have. But then, if you have other, higher-level tests that check your components are working well together, they're already implicitly covering that each component individually works well too - so your unit tests for that component are just duplicating the work the integration tests are already doing.
replies(1): >>45073627 #
7. tsimionescu ◴[] No.45072649[source]
The problem with this line of argument is that, in general, high-level behavior (covered by integration tests) is dependent on low-level behavior. So if your code is ascertained to work at the high level, you also know that it must be working at the lower level too. So, integration tests also tell you if your component works at a low level, not just a high level.

The converse is not true, however. It's perfectly possible for individual components to "work" well, but to not do the right thing from a high level perspective. Say, one component provides a good fast quicksort function, but the other component requires a stable sort to work properly - each is OK in isolation, but you need an integration test to figure out the mistake.
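
To sketch what I mean (hypothetical names, and pretend the sort is a non-stable quicksort):

    # Hypothetical sketch: two units that each pass their own unit tests,
    # but break when combined, because one silently requires a stable sort.

    def fast_sort_by_price(items):
        # Unit-tested contract: output is ordered by price. Passes in isolation.
        # (Imagine this wraps a non-stable quicksort for speed.)
        return sorted(items, key=lambda it: it["price"])

    def pick_earliest_cheapest(items_sorted_by_price):
        # Unit-tested with hand-ordered input: returns the first item at the
        # lowest price. Also passes -- but it assumes equal-priced items kept
        # their original insertion order, i.e. it requires a *stable* sort.
        return items_sorted_by_price[0]

    # Only a test that runs the two together, on data with duplicate prices,
    # exposes the mismatch between the units:
    def test_earliest_cheapest_end_to_end():
        items = [{"id": "b", "price": 5}, {"id": "a", "price": 5}]
        assert pick_earliest_cheapest(fast_sort_by_price(items))["id"] == "b"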

Unit tests are typically good scaffolding. They allow you to test bits of your infrastructure as you're building it, but before it's ready for integration into the larger project. But they give you relatively little assurance at the project level, and are not worth it unless you're pretty sure you're building the right thing in the first place.

replies(3): >>45073029 #>>45073085 #>>45073466 #
8. ◴[] No.45072748[source]
9. imiric ◴[] No.45073029{3}[source]
> So if your code is ascertained to work at the high level, you also know that it must be working at the lower level too.

No, that is not guaranteed.

Integration and E2E tests can only cover certain code paths, because they depend on the input and output from other systems (frontend, databases, etc.). This I/O might be crafted in ways that never trigger a failure scenario or expose a bug within the lower-level components. This doesn't mean that the issue doesn't exist—it just means that you're not seeing it.

Furthermore, because integration and E2E tests are by their nature more expensive to set up and run, there will be fewer of them, which means they will not have full coverage of the underlying components. Another issue is that these tests, particularly E2E and acceptance tests, are often written only with the happy path in mind, and ignore the myriad of inputs that might trigger a failure in the real world.

Another problem with your argument is that it ignores that tests have different audiences. E2E and acceptance tests are written for the end user; integration tests are written for system integrators and operators; and unit tests are written for users of the API, which includes the author and other programmers. If you disregard one set of tests, you are disregarding that audience.

To a programmer and maintainer of the software, E2E and acceptance tests have little value. They might not use the software at all. What they do care about is that the function, method, object, module, or package does what it says on the tin; that it returns the correct output when given a specific input; that it's performant, efficient, well documented, and so on. These users matter because they are the ones who will maintain the software in the long run.

So thinking that unit tests are useless because they're a chore to maintain is a very shortsighted mentality. Instead, it's more beneficial to see them as guardrails that make your future work easier, by giving you the confidence that you're not inadvertently breaking an API contract whenever you make a change, even when all higher-level tests remain green across the board.

replies(2): >>45073178 #>>45080787 #
10. integralid ◴[] No.45073085{3}[source]
> So if your code is ascertained to work at the high level, you also know that it must be working at the lower level too

In the ideal world, maybe. But it's very hard to test the edge cases of a sorting algorithm with an integration test. In general my experience is that algorithms, and some complex but pure functions, are worth writing unit tests for. CRUD app boilerplate is not.
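
For example, hypothetical unit tests for a small pure helper; these paths are trivial to hit directly, and nearly impossible to hit reliably through an integration test:

    # Hypothetical pure helper plus the edge-case unit tests that are hard
    # to reach from outside a larger system.
    def merge_sorted(a, b):
        """Merge two already-sorted lists into one sorted list."""
        out, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            if a[i] <= b[j]:
                out.append(a[i]); i += 1
            else:
                out.append(b[j]); j += 1
        return out + a[i:] + b[j:]

    def test_merge_empty_inputs():
        assert merge_sorted([], []) == []

    def test_merge_one_side_empty():
        assert merge_sorted([1, 2], []) == [1, 2]

    def test_merge_duplicates_preserved():
        assert merge_sorted([1, 1], [1]) == [1, 1, 1]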

replies(1): >>45073543 #
11. troupo ◴[] No.45073161[source]
> If you don't write unit tests, how do you know something works?

Integration tests. Unlike unit tests they actually test if something works.

replies(1): >>45073296 #
12. troupo ◴[] No.45073178{4}[source]
> This I/O might be crafted in ways that never trigger a failure scenario or expose a bug within the lower-level components.

You mean just like unit tests where every useful interaction between units is mocked out of existence?

> Furthermore, the fact that, by their nature, integration and E2E tests are often more expensive to setup and run, there will be fewer of them

And that's the main issue: people pretend that only unit tests matter, and as a result all other forms of testing are an afterthought. Every test harness and library is geared towards unit testing, and unit testing only.

replies(1): >>45073268 #
13. imiric ◴[] No.45073268{5}[source]
> You mean just like unit tests where every useful interaction between units is mocked out of existence?

Sure, that is a risk. But not all unit tests require mocking or stubbing. There may be plenty of pure functions that are worth testing.
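
A trivial, hypothetical example of the kind of unit test that needs no mocks at all:

    # Hypothetical pure function: no I/O, so its unit test needs no mocking or stubbing.
    def normalize_username(raw: str) -> str:
        """Lowercase, trim, and collapse internal whitespace."""
        return " ".join(raw.strip().lower().split())

    def test_normalize_username():
        assert normalize_username("  Alice   Smith ") == "alice smith"
        assert normalize_username("BOB") == "bob"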

Writing good tests requires care and effort, like any other code, regardless of the test type.

> And that's the main issue: people pretend that only unit tests matter, and as a result all other forms of testing are an afterthought.

Huh? Who is saying this?

The argument is coming from the other side with the claim that unit tests don't matter. Everyone arguing against this is saying that, no, all tests matter. (Let's not devolve into politics... :))

The idea of the test pyramid has nothing to do with one type of test being more important than another. It's simply a matter of practicality and utility. Higher-level tests can cover much more code than lower-level ones. In projects that keep track of code coverage, it's not unheard of for a few E2E and integration tests to cover a large percentage of the code base, e.g. >50% of lines or statements. This doesn't mean that these tests are more valuable. It simply means that they have a larger reach by their nature.

These tests also require more boilerplate to set up, depend on external systems, take more time to run, and so on. It is often impractical to rely on them during development, since they slow down the write-test loop. Instead, running the full unit test suite plus a select couple of integration and E2E tests can serve as a quick sanity check, while the entire test suite runs in CI.
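
One way to set that split up, sketched here with pytest markers (assuming a Python code base; all names are illustrative):

    # Register a marker (e.g. in pytest.ini) so slow tests can be excluded locally:
    #   [pytest]
    #   markers =
    #       integration: slower tests that need external services
    import pytest

    @pytest.mark.integration
    def test_checkout_flow_against_real_services():
        ...  # slow: needs a running backend and database

    def test_cart_total_is_sum_of_line_items():
        ...  # fast unit test, always run

    # Local write-test loop:  pytest -m "not integration"
    # CI:                     pytest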

Conversely, achieving >50% of line or statement coverage with unit tests alone also doesn't mean that the software works as it should when it interacts with other systems, or the end user.

So, again, all test types are important and useful in their own way, and help ensure that the software doesn't regress.

replies(1): >>45077919 #
14. yakshaving_jgt ◴[] No.45073296{3}[source]
This is utter nonsense.
replies(1): >>45077981 #
15. EliRivers ◴[] No.45073466{3}[source]
> So if your code is ascertained to work at the high level, you also know that it must be working at the lower level too.

I have 100% seen bugs that cancel each other out; code that's just plain wrong at the lower level, coming together by chance to work at the higher level such that one or more integration tests pass. When one piece of that lower-level code then gets fixed, whether deliberately or because of a library update, a hardware improvement, or some other change that should have nothing to do with the functionality, and the top-level integration tests start failing, it can be so painful to figure out.

I've also seen bugs that cancel each other out to make one integration test pass, but don't cancel each other out such that other integration tests fail. That can be a mindmelt; surely if THIS test works, then ALL THIS low level code must be correct, but simultaneously, if THAT test fails, then ALL THIS low level code is NOT correct. At which point, people start wishing they had lower level tests.

16. MoreQARespect ◴[] No.45073543{4}[source]
I've never in my life written a test for a sorting algorithm, nor, I'm sure, will I ever need to.

The bias most developers have towards integration tests reflects the fact that, even though we're often interviewed on it, it's quite rare that most developers actually have to write complex algorithms.

It's one of the ironies of the profession.

replies(1): >>45078221 #
17. skydhash ◴[] No.45073627{3}[source]
Sometimes you really need to ensure that something is a tree. And you do not need the whole forest around for that. Sure, you can’t have an adventure with only a tree. But if you need a tree, you need to make sure someone doesn’t bring a concrete tree sculpture.
18. troupo ◴[] No.45077919{6}[source]
> Sure, that is a risk. But not all unit tests require mocking or stubbing.

Not all integration tests require mocking or stubbing either. Yet your argument against integration tests is that they somehow won't trigger failure scenarios.

> The argument is coming from the other side with the claim that unit tests don't matter.

My argument is that the absolute vast majority of unit tests are redundant and not required.

> The idea of the test pyramid has nothing to do with one type of test being more important than another. It's simply a matter of practicality and utility.

You're sort of implying that all tests are of equal importance, but that is not the case. Unit tests are the worst of all tests, and provide very little value in comparison to most other tests, and especially in comparison to how many unit tests you have to write.

> it's not unheard of for a few E2E and integration tests to cover a large percentage of the code base, e.g. >50% of lines or statements. This doesn't mean that these tests are more valuable.

So, a single E2E test covers a scenario that spans >50% of the code. This is somehow "not valuable", despite the fact that you'd often need up to an order of magnitude more unit tests covering the same code paths for that same scenario (and without any guarantees that the units tested actually work correctly with each other).

What you've shown, instead, is that E2E tests are significantly more valuable than unit tests.

However, true, E2E tests are often difficult to set up and run. That's why there's a middle ground: integration tests. You mock/stub out any external calls (file systems, API calls, databases), but you test your entire system using only exposed APIs/interfaces/capabilities.
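
Roughly the shape I mean, as a hypothetical sketch (assuming a Flask app; only the database is faked, and the test drives the exposed HTTP API):

    # Hypothetical integration test: the whole app is exercised through its
    # public API, with only the real external dependency (the database) faked.
    from flask import Flask, jsonify, request

    class FakeOrderStore:
        """Stands in for the real database client."""
        def __init__(self):
            self.orders = {}
        def save(self, order_id, order):
            self.orders[order_id] = order
        def get(self, order_id):
            return self.orders.get(order_id)

    def create_app(store):
        app = Flask(__name__)

        @app.post("/orders")
        def create_order():
            order = request.get_json()
            order_id = str(len(store.orders) + 1)
            store.save(order_id, order)
            return jsonify(id=order_id, **order), 201

        @app.get("/orders/<order_id>")
        def get_order(order_id):
            return jsonify(store.get(order_id))

        return app

    def test_create_then_fetch_order():
        client = create_app(FakeOrderStore()).test_client()
        created = client.post("/orders", json={"item": "book", "qty": 2})
        assert created.status_code == 201
        fetched = client.get("/orders/" + created.get_json()["id"])
        assert fetched.get_json()["qty"] == 2

One scenario like this exercises routing, validation, serialization, and the glue between the units, without needing a real database or a browser.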

> These tests also require more boilerplate to setup, external system dependencies, they take more time to run, and so on.

And the only reason for that is this: "people pretend that only unit tests matter, and as a result all other forms of testing are an afterthought." It shouldn't be difficult to test your system/app the way your users will use it, but it always is. It shouldn't be difficult to mock/stub external access, but it always is.

That's why instead of writing a single integration test that tests a scenario across multiple units at once (at the same time testing that all units actually work with each other), you end up writing dozens of useless unit tests that test every single unit in isolation, and you often don't even know if they are glued together correctly until you get a weird error at 3 AM.

19. troupo ◴[] No.45077981{4}[source]
Unit tests test units in isolation.

Integration tests test that your system works. Testing how a system works covers the absolute vast majority of the functionality you'd test with unit tests, because you hit the same code paths and test the same behaviours you would with unit tests, just not in isolation.

This is a joke, but it's not: https://i.sstatic.net/yHGn1.gif

replies(1): >>45078040 #
20. yakshaving_jgt ◴[] No.45078040{5}[source]
I have been doing TDD for over a decade, and I don’t know why you’re trying to explain the basics to me.

Yes, you can exercise the same code paths with integrated tests as you might with unit tests. There are multiple approaches to driving integrated tests, from the relatively inexpensive approach of emulating an HTTP env, to something more expensive and brittle like Selenium. You could also just test everything with manual QA. Literally pay some humans to click through your application following a defined path and asserting outcomes. Every time you make a change.

Obviously all of these have different costs. And obviously, testing a pure function with unit tests (whether example based or property based) is going to be cheaper than testing the behaviour of that same function while incidentally testing how it integrates with its collaborators.
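
For instance, a hypothetical property-based unit test (using the hypothesis library) that costs almost nothing to write or run:

    # Hypothetical pure function and a property-based test for it.
    from hypothesis import given, strategies as st

    def slugify(title: str) -> str:
        """Lowercase and join words with hyphens."""
        return "-".join(title.lower().split())

    @given(st.text())
    def test_slugify_has_no_spaces_and_is_idempotent(title):
        slug = slugify(title)
        assert " " not in slug
        assert slugify(slug) == slug

Exercising the same function through the full stack means paying for request handling, collaborators, and fixtures on every example the test generates.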

replies(1): >>45086045 #
21. yakshaving_jgt ◴[] No.45078221{5}[source]
I write parsers all the time.

Why wouldn’t you test parsers in isolation?
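
As a hypothetical example of the kind of thing I mean, here is a tiny parser with a handful of branches, each of which is one line to cover in isolation and a whole scenario to cover end-to-end:

    # Hypothetical toy parser: each branch is trivial to unit test directly.
    import pytest

    def parse_duration(text: str) -> int:
        """Parse '90s', '5m', or '2h' into a number of seconds."""
        units = {"s": 1, "m": 60, "h": 3600}
        if not text or text[-1] not in units:
            raise ValueError(f"bad duration: {text!r}")
        return int(text[:-1]) * units[text[-1]]

    def test_parse_seconds():
        assert parse_duration("90s") == 90

    def test_parse_hours():
        assert parse_duration("2h") == 7200

    def test_rejects_missing_unit():
        with pytest.raises(ValueError):
            parse_duration("90")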

replies(1): >>45080767 #
22. simianwords ◴[] No.45080767{6}[source]
Sure, but not everyone is working at this level. Dogmatically writing unit tests where they don’t bring much value is something that happens all the time, and it needs to stop.

No one actually evaluates whether unit tests are needed.

Unit tests, at least in my experience, are needed sparingly - in specific places that encompass slightly complicated, well-contained logic.

replies(1): >>45080806 #
23. simianwords ◴[] No.45080787{4}[source]
> So thinking that unit tests are useless because they're a chore to maintain is a very shortsighted mentality. Instead, it's more beneficial to see them as guardrails that make your future work easier, by giving you the confidence that you're not inadvertently breaking an API contract whenever you make a change, even when all higher-level tests remain green across the board.

This is the kind of dogmatism I want people to understand. I’m not saying unit tests are useless, but they have a very narrow use: in units that encompass slightly complicated logic. Most of us write classes that just have a few for loops, if conditions, metrics, and a few transformations. The overhead of writing unit tests, mocking all external services, and continuously maintaining them when every small code change causes them to break (false positives) is pretty high.

replies(1): >>45081993 #
24. yakshaving_jgt ◴[] No.45080806{7}[source]
I think parsing happens in more places than people might think.

https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...

replies(1): >>45092786 #
25. imiric ◴[] No.45081993{5}[source]
> Most of us write classes that just have a few for loops, if conditions, metrics and a few transformations.

You're describing code. At what point does code become "worthy" of a unit test? How do you communicate this to your team members? This type of ambiguity introduces friction and endless discussions in code reviews, to the point that abiding by the convention that all code should be unit tested whenever possible is a saner long-term strategy. This doesn't have to be a strict rule, but it makes sense as a general convention. Besides, these days with LLMs, writing and maintaining unit tests doesn't have to be a chore anymore. It's one thing the tech is actually reasonably good at.

What I think we fundamentally disagree about is the value of unit tests. That small function with a few for loops and if conditions still has users, which at the end of the day might be only yourself. You can't be sure that it's working as intended without calling it. You can do this either manually; automatically by the adjacent code that calls it, whether that's within an integration/E2E test or in production; or with automated unit tests. Out of those options, automated unit tests are the ones that provide the highest degree of confidence, since you have direct control over its inputs and visibility of its outputs. Everything else has varying degrees of uncertainty, which carries a chance of exposing an issue to end users.

Now, you might be fine with that uncertainty, especially if you're working on a solo project. But this doesn't mean that there's no value in having extensive coverage from unit tests. It just means that you're willing to accept a certain level of uncertainty, willing to trade off confidence for the convenience of not having to write and maintain code that you personally don't find valuable, and willing to accept the risk of exposing issues to end users.

26. troupo ◴[] No.45086045{6}[source]
> You could also just test everything with manual QA. Literally pay some humans to click through your application following a defined path and asserting outcomes. Every time you make a change.

How do you tell if someone is arguing in bad faith? Well, they pretend that reductio ad absurdum is a valid argument.

> Obviously all of these have different costs. And obviously, testing a pure function with unit tests (whether example based or property based) is going to be cheaper than testing the behaviour of that same function while incidentally testing how it integrates with its collaborators.

Let's see. A single scenario in an integration test:

- tests multiple code paths, removing the need for multiple unit tests along that code path

- tests that the externally observable behaviour of the app/API/system is according to spec/docs

- tests that all units (that would otherwise be tested in isolation from each other) actually work together

This is obviously cheaper. The programmer (the expensive part) has to write less code, and the system doesn't suddenly break because someone didn't wire units together (the insanely expensive part, 'cause everything was mocked in tests; an unironically true story that hammered the final nail into the coffin of unit tests for me).

By the way, here's what Kent Beck has to say about unit tests: https://stackoverflow.com/a/153565

--- start quote ---

I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence

--- end quote ---

replies(1): >>45086253 #
27. yakshaving_jgt ◴[] No.45086253{7}[source]
Feel free to try coming up with a single integrated test that tests all 16 paths through this parsing function.

https://news.ycombinator.com/item?id=45081378

> By the way, here's what Kent Beck has to say about unit tests

As I pointed out to you earlier, I've been doing TDD for a long time. I'm already plenty familiar with Kent Beck's writing.

---

I'm not convinced that you actually know what you're talking about. You've contradicted yourself a number of times when responding to me and to others. You construct straw men to argue against (who said everything needs to be mocked in unit tests?). You've said "very few people write parsers", which is utter nonsense: parsing, whether you realise it or not, is one of the most common things you'll do as a working programmer. You've insisted that unit tests don't actually test that something works. You've created this false dichotomy where one has to choose between either isolated tests or integrated tests.

All I can say is good luck to you mate.

28. simianwords ◴[] No.45092786{8}[source]
Yeah, and if it pops up I might write a unit test for it. I don't wanna be forced to write one for every damn thing.
replies(1): >>45093573 #
29. yakshaving_jgt ◴[] No.45093573{9}[source]
Who is forcing you?