294 points NotPractical | 20 comments
xnorswap ◴[] No.41856752[source]
> Redbox.HAL.Configuration

> .ConfigurationFileService implements IConfigurationFileService

> STOP MAKING SERVICES AND FACTORIES AND INTERFACES AND JUST READ THE FUCKING

> JSON FILE YOU ENTERPRISE FUCKERS

I know it's cool to "hate" on OO, but "just read the fucking file" doesn't work if you want to run your unit tests without reading a fucking file.

It makes sense to abstract configuration behind an interface so you can easily mock it out or implement it differently for unit testing.

Perhaps you also want to have some services configured through a database instead.

This isn't a ConfigurationFileServiceFactoryFactory.
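
Roughly what that buys you, as a minimal sketch (the member names and the AppConfig type here are invented for illustration, not Redbox's actual API):

    // Hypothetical shapes, for illustration only.
    public interface IConfigurationFileService
    {
        AppConfig Load();
    }

    public sealed class AppConfig
    {
        public string? ConnectionString { get; set; }
    }

    // Production: really does just read the fucking file.
    public sealed class ConfigurationFileService : IConfigurationFileService
    {
        private readonly string _path;
        public ConfigurationFileService(string path) => _path = path;

        public AppConfig Load() =>
            System.Text.Json.JsonSerializer.Deserialize<AppConfig>(
                System.IO.File.ReadAllText(_path))
            ?? throw new System.IO.InvalidDataException($"Bad config at {_path}");
    }

    // Unit tests: hand the consumer a stub instead, no disk involved.
    public sealed class FakeConfigurationFileService : IConfigurationFileService
    {
        public AppConfig Load() => new() { ConnectionString = "test" };
    }

One small interface, two implementations - that's the whole indirection being complained about.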

replies(12): >>41856822 #>>41856831 #>>41856836 #>>41856965 #>>41857895 #>>41858054 #>>41859117 #>>41859509 #>>41859750 #>>41859882 #>>41860221 #>>41864182 #
1. burnte ◴[] No.41860221[source]
> I know it's cool to "hate" on OO, but "just read the fucking file" doesn't work if you want to run your unit tests without reading a fucking file.

Then don't do that. If in the real world it'll read a fucking file, then test by reading a fucking file. Tests aren't there just to be passed; they're there to catch problems, and if they're not testing the same workflows the code will see IRL, then the test is flawed. The first test should be reading a fucking file, and that fucking file could be full of all sorts of garbage.

Same goes for non-fucking files.

replies(4): >>41860439 #>>41860785 #>>41861793 #>>41862064 #
2. xnorswap ◴[] No.41860439[source]
Those are integration tests. Integration tests are great, but not when you want to run thousands of them in a few minutes. And not when you want to have lots running in parallel, accessing and potentially making "changes" to the same files.

I'm happy to have a long running integration test suite that runs on a build server.

But while working on a project, I need fast running unit tests that I can edit and run to get fast feedback on my work. I find that "time to iterate" is key to effective and enjoyable development. That's why hot module reloading is an amazing innovation for the front-end. The back-end equivalent is quickly running affected unit tests.

So I'd rather unit test my FooFileReader to make sure it can parse (or not) what's in various files, and unit test my service which consumes the output of my FooFileReader by either parameterising the FooFile result or having an IFooFileReader injected. (Either works to separate concerns.)

While unit testing, I'm going to take "System.IO.File can read a file" as a given and write tests accordingly. I don't want a test that sometimes fails because "read errors can happen IRL". That doesn't help test my business logic.

I can even test what happens if read failures do happen, because I can have my mock IFooFileReader throw a FileNotFoundException or any other exception. I'd rather not have to force a real-world scenario where I'm getting such an error.

In a functional world, it's the difference between:

    function string -> result
and

    function string -> parsedType -> result
The second is cleaner and neater, and you can separately test:

    function string -> parsedType
    function parsedType -> result
The second is more testable, at the cost of being more indirect.

Interfaces and factories are just an idiomatic .NET way of doing this indirection over services and classes.

Of course you can also write more in a functional style, and there are times and places to do that too.
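
In C# the same split looks something like this (sketch only - FooConfig and the method names are invented for illustration):

    using System.Text.Json;

    public sealed record FooConfig(string Endpoint, int RetryCount);

    public static class FooFileReader
    {
        // string -> parsedType: pure, unit-testable with inline strings.
        public static FooConfig Parse(string json) =>
            JsonSerializer.Deserialize<FooConfig>(json)
            ?? throw new JsonException("Empty or invalid Foo config");
    }

    public sealed class FooService
    {
        // parsedType -> result: the business logic never touches the file system.
        public string Describe(FooConfig config) =>
            $"Will retry {config.Endpoint} up to {config.RetryCount} times";
    }

Then only a thin integration test has to prove that File.ReadAllText plus Parse compose correctly.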

replies(5): >>41860567 #>>41861709 #>>41862005 #>>41862051 #>>41862404 #
3. consp ◴[] No.41860567[source]
Unit tests are nice to have if you want to hit test coverage targets or have sufficient time to implement them properly. In practice they contain only vague assumptions (the test passes, but the integration breaks because those assumptions were false) or things any basic code review should catch (and if you keep paying peanuts they won't do that, so you write more unit tests).
replies(1): >>41860957 #
4. jessekv ◴[] No.41860785[source]
Yeah modeless software is one honking great idea. (RIP Larry Tesler)
5. jessekv ◴[] No.41860957{3}[source]
A good interface is testable, this is how you build up reliable abstractions to solve higher level problems. The devs on my team that take shortcuts here waste more time in the end.

There is no cost trade-off.

replies(2): >>41862483 #>>41862622 #
6. wtallis ◴[] No.41861709[source]
> While unit testing, I'm going to test "given that System.IO.File can read a file", and write tests accordingly. I don't want a test sometimes fails because "read errors can happen IRL".

That sounds pretty squarely in the "you ain't gonna need it" category. If your test harness cannot make a temporary directory and populate it with a copy of the test config file that's stored in the same SCM repo as the test case code, then you simply have a broken CI server. There's no need to complicate your codebase and make your tests less realistic all to avoid hypothetical problems that would almost certainly break your test suite before the test case gets around to attempting an fopen. Just read the damn file.

There are more complicated instances where mocking and dependency injection is needed. "fopen might fail on the CI server" usually isn't one of them.
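
The "just read the damn file" version of a test really is only a few lines (a sketch, assuming xUnit; in a real suite the JSON would be a fixture checked into the repo rather than inlined):

    using System.IO;
    using System.Text.Json;
    using Xunit;

    public class ConfigFileTests
    {
        [Fact]
        public void Reads_a_real_config_file_from_a_temp_path()
        {
            var path = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName() + ".json");
            File.WriteAllText(path, "{\"Endpoint\":\"http://localhost\",\"RetryCount\":3}");
            try
            {
                // Actually hit the file system, then assert on the parsed content.
                var doc = JsonDocument.Parse(File.ReadAllText(path));
                Assert.Equal(3, doc.RootElement.GetProperty("RetryCount").GetInt32());
            }
            finally
            {
                File.Delete(path);
            }
        }
    }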

7. scrapcode ◴[] No.41861793[source]
Yeah, that's called a fucking integration test.
8. miningape ◴[] No.41862005[source]
You're mixing definitions - integration tests concern testing the "integration" between all parts of a solution. It has nothing to do with reading a JSON file; it's perfectly acceptable to read from a JSON file and use its data in a unit test.

Also, reading/parsing a JSON file is fast enough for hot reloads / auto-rerunning unless you have multi-gigabyte files - so the argument for speed makes no sense. I'd argue it's slower as a whole to have to code up mocks and fill in the data than to copy-paste some JSON.

I do agree that the second is neater; however, past a certain point of enterprise coding it's a negligible difference compared to the overall complexity of the code - so taking a shortcut and making your tests simpler through JSON files actually ends up being the cleaner / neater solution.

>While unit testing, I'm going to test "given that System.IO.File can read a file", and write tests accordingly. I don't want a test sometimes fails because "read errors can happen IRL". That doesn't help test my business logic.

Since you're given that, use it. If your test fails because a "low level" dependency is failing, that indicates something is seriously fucked up on your machine.

replies(1): >>41864392 #
9. rbanffy ◴[] No.41862051[source]
> And not when you want to have lots running in parallel, accessing and potentially making "changes" to the same files.

Reading a file is a fast operation these days. Re-reading a file shortly after a previous read is barely more than a memory copy, since it comes straight from the page cache.

Making the structure more complicated so that you can avoid reading a file during unit tests is a poor investment of resources - that complexity will haunt the team forever.

10. miningape ◴[] No.41862064[source]
> Tests aren't there to just be passed, they're to catch problems

So many developers don't understand this simple concept - it manifests in two ways: 1. not writing tests, 2. writing too many / too-specific tests.

Testing should always be focussed on the OUTCOMES, never the implementation. That's why tests are so good for making sure edge cases are covered - we can assert on the input and the expected outcome of the code. I like the mental image that in an ideal world I could point the same tests at a completely separate implementation and they would still pass (mocks/stubs and implementation-specific tests don't pass this check).

I'm always far more frustrated by 2 than by 1 - since 2 adds so much unnecessary code and complexity that doesn't need to be there, growing technical debt through the very tool that should help us manage it. Such tests make changing implementations painful. And worst of all, their authors think they're doing something correctly, and when combined with the sunk-cost fallacy they're incredibly resistant to changing these fucked-up tests.

Don't get me wrong, 1 is annoying too, but he'll at least add the tests when you ask him to and not over-engineer everything.

replies(1): >>41862500 #
11. neonsunset ◴[] No.41862404[source]
The vast majority of codebases that spam factories are misusing the pattern and simply add more boilerplate and abstraction bloat for something that is easily expressible in true idiomatic C# itself.

You see it everywhere someone handrolls a "ServiceResolver" or "DtoMapper" that wraps what DI or the ORM already handles on your behalf, simply because it's consistent with ancient, badly written code that originates from practices carried over from heavier Java and, before that, C++ codebases.
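
For example, the built-in container already covers the common case without any handrolled resolver (a sketch using Microsoft.Extensions.DependencyInjection; the config service and FooService names are placeholders from upthread):

    using Microsoft.Extensions.DependencyInjection;

    var services = new ServiceCollection();

    // Registration is the whole "factory": the container constructs and caches it.
    services.AddSingleton<IConfigurationFileService>(
        _ => new ConfigurationFileService("config.json"));
    services.AddTransient<FooService>();

    using var provider = services.BuildServiceProvider();
    var foo = provider.GetRequiredService<FooService>();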

12. miningape ◴[] No.41862483{4}[source]
In most cases especially for important code paths I agree.

There is a case where I think it is justifiable not to write a single test: startups. Specifically, pre-seed and seed-round-funded companies are allowed to skip the majority of tests - however, critical paths, especially those that are important to customers (i.e. transactions), must be tested.

By the time you have built out that MVP and have a few customers, you should transition to writing more tests. And as the number of engineers, scope, or complexity grows, you need to add tests.

13. IggleSniggle ◴[] No.41862500[source]
There's a lot of room for nuance. If you "just read the fucking file" but the file isn't a "real" configuration file, then isn't it just a "mock"? If you replace all network calls with an interceptor that forwards all calls and responses, and just check what's happening as a "listener", aren't you mocking out the network calls to a non-real implementation?

At the end of the day, tests are necessarily a mock-up of what's real. You just happen to disagree with where some people put the abstraction layer. I also would like to make my tests more "real", but I have a lot of sympathy for folks who are trying to test something smaller without involving e.g. a file. After all, the whole point of "everything is a file" in Unix is that we shouldn't need to worry about this detail; it's an OS concern. If you write to a file that's not actually a file on disk but actually a device, it should fundamentally be okay and work as expected.

replies(1): >>41862666 #
14. CamperBob2 ◴[] No.41862622{4}[source]
It's testable right up until the point where it's asynchronously interactive.

Would unit tests have avoided the Therac-25 incident?

15. miningape ◴[] No.41862666{3}[source]
Yeah, don't get me wrong, I'm not anti-mock - real code is messy, and the ideal of the same tests running everywhere will never work, so mocks are necessary. But I do think there's a lot more harm from over-mocking than from under-mocking.

> file isn't a "real" configuration file then isn't it just a "mock?"

I want to say "no" but I haven't thought about it enough yet. My reasoning is that the file itself contains information about the expected messages to/from other systems, since it is the body of whatever the system should respond to. And while it is only one layer removed from just creating the same object in memory for your test, this "feels" different because you can't take that in-memory object and pull it out of your codebase into curl the way you can with the file.

replies(1): >>41862925 #
16. IggleSniggle ◴[] No.41862925{4}[source]
Just to work this out together a little more in discussion form, since I appreciate your attitude:

Consider these two scenarios:

- read "test1-config.json" from disk, into whatever most easy JSON-adjacent format makes sense for your lang

- just use the JSON-adjacent format directly

Isn't the difference between these that one couples the input configuration of the environment to the tests (possibly including env vars and OS concerns around file I/O), making running the tests more confusing/complicated in aggregate, while the other couples the input configuration to the test code itself, making the unit under test clearer but potentially less reflective of the overall system?
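
Concretely, the two scenarios look something like this (sketch; TestConfig is a made-up type):

    using System.IO;
    using System.Text.Json;

    public sealed record TestConfig(string Endpoint, int RetryCount);

    public static class Scenarios
    {
        // 1. Read "test1-config.json" from disk: couples the test to the
        //    environment (working directory, file I/O, permissions).
        public static TestConfig FromDisk() =>
            JsonSerializer.Deserialize<TestConfig>(
                File.ReadAllText("test1-config.json"))!;

        // 2. Use the JSON-adjacent format directly: couples the input to
        //    the test code itself, no I/O involved.
        public static TestConfig Inline() => new("http://localhost", 3);
    }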

Effectively this is just an argument between integration tests and unit tests. Unit testers certainly have the rhetorical upper hand here, but I think the grug-brained developers among us feel that "the whole program should be a pure function."

That can ultimately be reduced to a P-NP problem.

replies(1): >>41863338 #
17. miningape ◴[] No.41863338{5}[source]
Yeah - I don't think we should go so far as to write a config file for a test. But if we have something that is already readily convertible to/from JSON, it should be used. I see it not so much as a config for a test but as an argument we're storing in a separate file.

For example, if we had a DTO that serialises to/from JSON, we should be storing the JSON, not creating that DTO manually - and I would push it further and say the same for any structure that is easily transformed from JSON, like extracting a certain property and using that in the test (although this is also context-dependent, for example if there are other tests using the same file). As a counter-example, I wouldn't advocate using JSON config files to test something completely unrelated to an underlying JSON structure.

> That can ultimately be reduced to a P-NP problem

Yeah, ideally the goal should be to write the simplest code possible, however we get there - shoehorning an approach is always going to add complexity. I think there's a lot of danger in taking rhetoric too far; sometimes we push an abstraction to its limits when what's really required is a new perspective that works well at those limits.

Effectively, I think there's a range in which any argument is applicable; it's a matter of assessing whether the range is large enough, whether the rules are simple enough, and whether it solves the actual problem at hand.

replies(1): >>41864443 #
18. Izkata ◴[] No.41864392{3}[source]
It's an absurdly common mistake though, on the level of Hungarian notation being misused and having to be split into two names.

Basically too many unit testing tutorials were simplified too far, so the vast majority of people think a "unit" is syntactic rather than semantic. Like, a single function rather than a single action.

19. Izkata ◴[] No.41864443{6}[source]
> Yeah - I don't think we should go so far as to write a config file for a test. But if we have something that is already readily convertible to/from JSON, it should be used. I see it not so much as a config for a test but as an argument we're storing in a separate file.

This sounds like a great candidate for the facade pattern: https://en.m.wikipedia.org/wiki/Facade_pattern

Basically you hide dependencies behind a small interface, which lets you swap out implementations more easily. The facade is also part of your codebase rather than an external API, so it gives you something stable to mock. Rather than a building facade like the name is based on, I think of these as a stable foundation of things a module calls out to. Like your code is a box with an interface on one side (what tests and the rest of the codebase interact with) and the facade(s) are on the other side (dependencies/mocks of dependencies).
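
A minimal version of that (sketch, names invented):

    using System.Collections.Generic;
    using System.IO;

    // The facade: a tiny interface the codebase owns, sitting between a
    // module and its dependency (here, the file system).
    public interface IConfigSource
    {
        string ReadAll(string name);
    }

    // Production implementation just delegates to the stdlib.
    public sealed class FileConfigSource : IConfigSource
    {
        private readonly string _root;
        public FileConfigSource(string root) => _root = root;
        public string ReadAll(string name) =>
            File.ReadAllText(Path.Combine(_root, name));
    }

    // Test implementation: the stable thing to swap in or mock.
    public sealed class InMemoryConfigSource : IConfigSource
    {
        private readonly Dictionary<string, string> _files = new();
        public void Add(string name, string contents) => _files[name] = contents;
        public string ReadAll(string name) => _files[name];
    }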

replies(1): >>41871794 #
20. IggleSniggle ◴[] No.41871794{7}[source]
I have seen over-reliance on the facade pattern devolve into endless indirection that can make the code needlessly confusing. If you are already familiar with the codebase, it doesn't seem like a big deal, but when you onboard, you'll find your new teammate combing through file after file after file just to discover, "oh, there's never any external API call or specific business logic involved, we were just reading a static json file from disk that does not change its content during a single run."

Using the known, baked-in stdlib functions for standard behavior removes a lot of potential uncertainty from your codebase (while also sometimes making it harder to test).