
xnorswap No.41856752
> Redbox.HAL.Configuration.ConfigurationFileService implements IConfigurationFileService

> STOP MAKING SERVICES AND FACTORIES AND INTERFACES AND JUST READ THE FUCKING JSON FILE YOU ENTERPRISE FUCKERS

I know it's cool to "hate" on OO, but "just read the fucking file" doesn't work if you want to run your unit tests without reading a fucking file.

It makes sense to abstract configuration behind an interface so you can easily mock it out or implement it differently for unit testing.

Perhaps you also want to have some services configured through a database instead.

This isn't a ConfigurationFileServiceFactoryFactory.
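A minimal sketch of what I mean (illustrative names only, not Redbox's actual types; System.Text.Json assumed):

    using System.Collections.Generic;
    using System.IO;
    using System.Text.Json;

    public interface IConfigurationService
    {
        string Get(string key);
    }

    // Production: backed by a JSON file on disk.
    public sealed class JsonFileConfigurationService : IConfigurationService
    {
        private readonly Dictionary<string, string> _values;

        public JsonFileConfigurationService(string path)
        {
            _values = JsonSerializer.Deserialize<Dictionary<string, string>>(
                          File.ReadAllText(path))
                      ?? new Dictionary<string, string>();
        }

        public string Get(string key) => _values[key];
    }

    // Tests (or a database-backed variant) supply values without touching disk.
    public sealed class InMemoryConfigurationService : IConfigurationService
    {
        private readonly Dictionary<string, string> _values;

        public InMemoryConfigurationService(Dictionary<string, string> values)
            => _values = values;

        public string Get(string key) => _values[key];
    }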

burnte No.41860221
> I know it's cool to "hate" on OO, but "just read the fucking file" doesn't work if you want to run your unit tests without reading a fucking file.

Then don't do that. If in the real world it'll read a fucking file, then test with reading a fucking file. Tests aren't there just to be passed; they're there to catch problems, and if they're not testing the same workflows the code will see IRL, then the test is flawed. The first test should be reading a fucking file, and that fucking file could be full of all sorts of garbage.

Same goes for non-fucking files.
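A first test along those lines might look like this sketch (xUnit-style, reusing the hypothetical JsonFileConfigurationService from the sketch above):

    using System.IO;
    using System.Text.Json;
    using Xunit;

    public class ConfigFileTests
    {
        [Fact]
        public void Loader_rejects_a_garbage_config_file()
        {
            // A real file on disk, full of garbage.
            var path = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
            File.WriteAllText(path, "{ not json at all %%%");
            try
            {
                Assert.Throws<JsonException>(
                    () => new JsonFileConfigurationService(path));
            }
            finally
            {
                File.Delete(path);
            }
        }
    }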

miningape No.41862064
> Tests aren't there to just be passed, they're to catch problems

So many developers don't understand this simple concept. It manifests in two ways: 1. not writing tests, and 2. writing too many / too-specific tests.

Testing should always be focused on the OUTCOMES, never the implementation. That's why tests are so good for covering edge cases: we assert the input and the expected outcome of the code. I like the mental image that, in an ideal world, I could run the same tests against a completely separate implementation and they would still pass (mocks/stubs and implementation-specific tests fail this standard).
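To make the contrast concrete, a sketch (Order and FinalTotal are made-up names):

    using Xunit;

    public class DiscountTests
    {
        // Outcome-focused: assert what comes out, not how it's computed.
        // This test would pass against any correct reimplementation.
        [Fact]
        public void Discount_is_applied_to_the_order_total()
        {
            var order = new Order(Total: 100m, DiscountPercent: 10m);
            Assert.Equal(90m, order.FinalTotal());
        }

        // Implementation-focused (the kind to avoid): verifying that a
        // particular internal collaborator was called pins the test to one
        // specific implementation, e.g. with a mocking library:
        //   calculator.Verify(c => c.ApplyDiscount(100m, 10m), Times.Once());
    }

    public record Order(decimal Total, decimal DiscountPercent)
    {
        public decimal FinalTotal() => Total * (1 - DiscountPercent / 100m);
    }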

I'm always far more frustrated by 2 than by 1, since 2 adds so much unnecessary code and complexity, growing technical debt through the very tool that should help us manage it. Those tests make changing implementations painful. And worst of all, their authors think they're doing something correctly, and when that's combined with the sunk-cost fallacy, they're incredibly resistant to changing these fucked-up tests.

Don't get me wrong, 1 is annoying too, but that developer will at least add the tests when you ask and won't over-engineer everything.

IggleSniggle No.41862500
There's a lot of room for nuance. If you "just read the fucking file" but the file isn't a "real" configuration file, isn't it just a "mock"? If you replace all network calls with an interceptor that forwards all calls and responses, and just check what's happening as a "listener", aren't you mocking out the network calls with a non-real implementation?

At the end of the day, tests are necessarily a mock-up of what's real; you just happen to disagree with where some people put the abstraction layer. I'd also like to make my tests more "real", but I have a lot of sympathy for folks who are trying to test something smaller without involving, e.g., a file. After all, the whole point of "everything is a file" in Unix is that we shouldn't need to worry about this detail; it's an OS concern. If you write to a "file" that's not actually a file on disk but a device, it should fundamentally be okay and work as expected.
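That's the same spirit as coding against a reader rather than a path, so the caller decides what "the file" is. A sketch (AppConfig is a made-up type):

    using System.IO;
    using System.Text.Json;

    public sealed record AppConfig(int Retries);

    public static class ConfigLoader
    {
        // Depend on TextReader, not on a filename: the caller decides whether
        // the source is a disk file, a device, or an in-memory string.
        public static AppConfig Load(TextReader reader) =>
            JsonSerializer.Deserialize<AppConfig>(reader.ReadToEnd())
            ?? throw new InvalidDataException("empty config");
    }

    // Production: using var r = File.OpenText("app-config.json");
    // Test:       var r = new StringReader("{\"Retries\":3}");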

miningape No.41862666
Yeah, don't get me wrong, I'm not anti-mock. Real code is messy, and the ideal of the same tests running everywhere will never work, so mocks are necessary. But I do think there's a lot more harm in over-mocking than in under-mocking.

> file isn't a "real" configuration file then isn't it just a "mock?"

I want to say "no", but I haven't thought about it enough yet. My reasoning is that the file itself contains information about the expected messages to/from systems, since it is the body of whatever the system should respond to. And while it's only one layer removed from creating the same object in memory for your test, it "feels" different, because you can't just pull an in-memory object out of your codebase and into curl.

IggleSniggle No.41862925
Just to work this out together a little more in discussion form, since I appreciate your attitude:

Consider these two scenarios:

- read "test1-config.json" from disk, into whatever JSON-adjacent format is easiest in your language

- just use the JSON-adjacent format directly

Isn't the difference between these that the first couples the environment's input configuration to the tests (possibly including env vars and OS concerns around file I/O), making the tests more confusing/complicated to run in aggregate, while the second couples the input configuration to the tests themselves, making the unit under test clearer but potentially less reflective of the overall system?
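In code, the two scenarios might look like this (sketch, reusing the made-up AppConfig type from earlier):

    using System.IO;
    using System.Text.Json;

    // Scenario 1: couple the test to the environment (paths, file I/O, encoding).
    var fromDisk = JsonSerializer.Deserialize<AppConfig>(
        File.ReadAllText("test1-config.json"));

    // Scenario 2: couple the input configuration to the test itself.
    var inline = JsonSerializer.Deserialize<AppConfig>(
        "{\"Retries\":3}");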

Effectively this is just an argument between integration tests and unit tests. Unit testers certainly have the rhetorical upper hand here, but I think the grug-brained developers among us feel that "the whole program should be a pure function."

That can ultimately be reduced to a P-NP problem.

miningape No.41863338
Yeah, I don't think we should go so far as to write a config file for a test. But if we have something that is already readily convertible to/from JSON, it should be used. I see it not so much as a config for a test, but as an argument we're storing in a separate file.

For example, if we have a DTO that serialises to/from JSON, we should be storing JSON rather than creating that DTO manually. I'd push it further to any structure that is easily transformed to/from JSON, like extracting a certain property and using that in the test (although this is also context-dependent, for example if there are other tests using the same file). As a counter-example, I wouldn't advocate using JSON config files to test something completely unrelated to an underlying JSON structure.
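Concretely, something like this sketch (OrderDto and the fixture path are made up):

    using System.IO;
    using System.Text.Json;

    // Build the test input from the stored JSON fixture rather than
    // hand-constructing the DTO.
    var json = File.ReadAllText("fixtures/order.json");
    var dto = JsonSerializer.Deserialize<OrderDto>(json)!;

    // Or extract just the one property a narrower test needs:
    var status = JsonDocument.Parse(json)
        .RootElement.GetProperty("Status").GetString();

    // Made-up DTO shape for this sketch.
    public sealed record OrderDto(string Status, decimal Total);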

> That can ultimately be reduced to a P-NP problem

Yeah, ideally the goal should be to write the simplest code possible, however we get there; shoehorning an approach is always going to add complexity. I think there's a lot of danger in taking rhetoric too far: sometimes we push an abstraction to its limits, when what's really required is a new perspective that works well at those limits.

Effectively, I think there's a range in which any argument is applicable; it's a matter of assessing whether the range is large enough, whether the rules are simple enough, and whether it solves the actual problem at hand.

Izkata No.41864443
> Yeah, I don't think we should go so far as to write a config file for a test. But if we have something that is already readily convertible to/from JSON, it should be used. I see it not so much as a config for a test, but as an argument we're storing in a separate file.

This sounds like a great candidate for the facade pattern: https://en.m.wikipedia.org/wiki/Facade_pattern

Basically, you hide dependencies behind a small interface, which lets you swap out implementations more easily. The facade is also part of your codebase rather than an external API, so it gives you something stable to mock. Rather than the building facade the name comes from, I think of these as a stable foundation of things a module calls out to: your code is a box with an interface on one side (what tests and the rest of the codebase interact with) and the facade(s) on the other (dependencies, or mocks of dependencies).
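For the config-file case, a sketch of that shape (names made up):

    using System.IO;

    // The facade: a small interface owned by our codebase, hiding the dependency.
    public interface IConfigSource
    {
        string ReadConfigText();
    }

    // Production implementation: the real filesystem.
    public sealed class FileConfigSource : IConfigSource
    {
        private readonly string _path;
        public FileConfigSource(string path) => _path = path;
        public string ReadConfigText() => File.ReadAllText(_path);
    }

    // Test implementation: a stable seam to fake, no filesystem involved.
    public sealed class FakeConfigSource : IConfigSource
    {
        public string ReadConfigText() => "{\"Retries\":3}";
    }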

IggleSniggle No.41871794
I have seen over-reliance on the facade pattern devolve into endless indirection that makes the code needlessly confusing. If you're already familiar with the codebase, it doesn't seem like a big deal, but when someone new onboards, you'll find your teammate combing through file after file after file just to discover, "oh, there's never any external API call or specific business logic involved; we were just reading a static JSON file from disk that doesn't change its content during a single run."

Using the known, baked-in stdlib functions for standard behavior removes a lot of potential uncertainty from your codebase (while also sometimes making it harder to test).
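Sometimes the honest version is just the direct call, as a sketch (same made-up AppConfig as above):

    using System.IO;
    using System.Text.Json;

    // No facade, no indirection: read the file and parse it, and accept that
    // tests of this path will touch the real filesystem.
    var config = JsonSerializer.Deserialize<AppConfig>(
        File.ReadAllText("app-config.json"));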