294 points NotPractical | 70 comments
1. xnorswap ◴[] No.41856752[source]
> Redbox.HAL.Configuration

> .ConfigurationFileService implements IConfigurationFileService

> STOP MAKING SERVICES AND FACTORIES AND INTERFACES AND JUST READ THE FUCKING

> JSON FILE YOU ENTERPRISE FUCKERS

I know it's cool to "hate" on OO, but "just read the fucking file" doesn't work if you want to run your unit tests without reading a fucking file.

It makes sense to abstract configuration behind an interface so you can easily mock it out or implement it differently for unit testing.

Perhaps you also want to have some services configured through a database instead.

This isn't a ConfigurationFileServiceFactoryFactory.
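To make the point concrete, here's a minimal sketch of the abstraction being defended (all names here are illustrative, not the actual Redbox types):

```csharp
using System.Collections.Generic;
using System.IO;
using System.Text.Json;

public interface IConfigurationService
{
    string Get(string key);
}

// Production implementation: reads the JSON file once.
public sealed class JsonFileConfigurationService : IConfigurationService
{
    private readonly Dictionary<string, string> _values;

    public JsonFileConfigurationService(string path) =>
        _values = JsonSerializer.Deserialize<Dictionary<string, string>>(
            File.ReadAllText(path))!;

    public string Get(string key) => _values[key];
}

// Test implementation: no file I/O at all.
public sealed class InMemoryConfigurationService : IConfigurationService
{
    private readonly Dictionary<string, string> _values;

    public InMemoryConfigurationService(Dictionary<string, string> values) =>
        _values = values;

    public string Get(string key) => _values[key];
}
```

A unit test hands the consumer an InMemoryConfigurationService and never touches the disk.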

replies(12): >>41856822 #>>41856831 #>>41856836 #>>41856965 #>>41857895 #>>41858054 #>>41859117 #>>41859509 #>>41859750 #>>41859882 #>>41860221 #>>41864182 #
2. rkachowski ◴[] No.41856822[source]
Most of these issues disappear with the introduction of first-class functions. There's nothing noble about the thick indirection inherent in old-school enterprise programming.
replies(1): >>41858756 #
3. xnorswap ◴[] No.41856831[source]
Could or should there just be a `IConfigurationService` instead of a separate IConfigurationFileService? Yes, probably.

"Interface all the things" is a bit lazy, but it's easy, especially if you have Moq as a way to auto-mock interfaces and a DI framework to setup factory methods.

But spinning into rage just because you see an interface or abstract factory isn't healthy.

replies(2): >>41856869 #>>41861534 #
4. guax ◴[] No.41856836[source]
Why do you need the interface? You can extend/mock the class itself. Refactoring code is easy and cheap. There is no reason for complex abstractions that protect implementation outside of libraries and frameworks.
replies(1): >>41862117 #
5. throwaway365x2 ◴[] No.41856869[source]
I don't follow .Net closely, but it seems like there should be a better alternative. Java has a library called "Mockito" that can mock classes directly without requiring an interface. I assume something similar exists for .Net, as they have similar capabilities. Making an interface for one class just so another class can be tested seems like letting the tool (tests) determine the architecture of what it is testing. Adding complexity in the name of TDD is a close second on my list of triggers.

There's nothing that triggers* me more than seeing an interface that only has one implementation. That's a huge code smell and often a result of premature architecture design, in my opinion. It also often leads to complexity where, if you have an interface, you create a factory class/method to instantiate a "default" implementation. Fortunately it seems that this is not done as often as before. Our code has no factories and only a few interfaces that actually have a practical use. The same applied to my previous workplace.

* The trigger applies to 2024 Java code written as if it was 2004. I may have a form of PTSD after many years of interfaces and FactoryFactory, but fortunately times have changed. I don't see much of that today except in legacy systems/organizations.

replies(2): >>41856928 #>>41858730 #
6. xnorswap ◴[] No.41856928{3}[source]
I'm sure the same exists for .NET ( Moq can probably do it? ), but writing against an interface and having concrete implementations supplied by the DI framework is pretty much the ordained way to do things in .NET.

I used to be in the "Interfaces with only a single implementation is a code smell" camp, but I prefer to follow the principle of least surprise, so going with the flow and following the way the MS standards want you to do things makes it easier to onboard developers and get people up to speed with your code base. Save "Do it your own way" for those parts of the system that really requires it.

And technically the auto-generated mock is a second implementation, even if you never see it.

replies(2): >>41857064 #>>41857205 #
7. proikan ◴[] No.41856965[source]
Isn't `dependency injection` (aka passing arguments) the big thing that's supposed to solve this?

  Config config;
  // production
  config_from_file(&config, "config.json");
  run_production_stuff(&config);
  
  // unit tests
  Config config;
  config_from_memory(&config, &some_test_values);
  run_tests(&config);
replies(1): >>41857024 #
8. xnorswap ◴[] No.41857024[source]
Yes, and the typical pattern for .NET DI is to do so with interface based parameters.

So let's say you have a service FooService that requires some configuration.

( Ignoring the System.configuration namespace for now)

You'd have:

    class FooService(IConfigurationService ConfigurationService){
        // Access Configuration Through IConfigurationService
    }

Then elsewhere you'd set up your DI framework to inject your ConfigFileService to satisfy IConfigurationService in prod.

Yes, it can sometimes feel a bit like "turtles all the way down", where sometimes you just wish you had a bunch of concrete implementations.

In unit tests, you'd auto-mock IConfigurationService. For integration tests you might provide a different concrete resolution.

There are some advantages to service based DI though. The standard ASP.NET DI framework makes it trivially easy to configure it as a singleton, or per-request-lifetime, or per-instantiation, without having to manually implement singleton patterns.

This gives you good control over service lifetime.
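For instance, with the standard Microsoft.Extensions.DependencyInjection package the lifetime is a one-word choice at registration time. A hedged sketch (ConfigFileService here is a stub standing in for the real file-reading implementation; FooService matches the illustrative name above):

```csharp
using Microsoft.Extensions.DependencyInjection;

public interface IConfigurationService
{
    string Get(string key);
}

// Stub standing in for the real file-reading implementation.
public sealed class ConfigFileService : IConfigurationService
{
    public string Get(string key) => "from-file";
}

public sealed class FooService
{
    private readonly IConfigurationService _config;
    public FooService(IConfigurationService config) => _config = config;
    public string Mode => _config.Get("Mode");
}

public static class CompositionRoot
{
    public static FooService Build()
    {
        var services = new ServiceCollection();

        // Lifetime is a one-word choice here:
        services.AddSingleton<IConfigurationService, ConfigFileService>(); // shared for the app
        // services.AddScoped<IConfigurationService, ConfigFileService>();    // one per request
        // services.AddTransient<IConfigurationService, ConfigFileService>(); // one per resolution

        services.AddTransient<FooService>();
        return services.BuildServiceProvider().GetRequiredService<FooService>();
    }
}
```

The container sees FooService's constructor wants an IConfigurationService and supplies whichever registration is configured, at whichever lifetime.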

replies(1): >>41857340 #
9. throwaway365x2 ◴[] No.41857064{4}[source]
I think you have good approach. I also tend to go with the flow and follow the common practice. If I tried to do "Java in C#", it would make it more difficult to follow my code and decrease maintainability.

I sometimes work on legacy C# code that we inherited from another team and I try to follow the style as close as possible. I just haven't invested enough time to make any informed decisions about how things should be.

replies(1): >>41857197 #
10. Jerrrrrrry ◴[] No.41857197{5}[source]
You are both over thinking it.

GIT? unit tests? and i thought debuggers spoiled us?

although cavemen-esque in comparison to 'modernity', it wasn't a nightmare to pause/resume program flow and carefully distill every suspected-erroneous call to Console.Log(e)/stdout/IO/alert(e)/WriteLine(e) to find the fun/troublesome bits of one's program - instead of a tedious labyrinth of stack traces obfuscating away any useful information, further insulted by nearly un-googlable compiler errors.

Tests were commented-out function calls with mock data.

If you never need to instantiate another instance of a structure so much so that it would benefit from an explicit schema for its use - whether it be an object or class inheritance or prototype chain - then sure, optimize it into a byte array, or even a proper Object/struct.

But if it exists / is instantiated once or twice, it is likely best optimized as raw variables - short-cutting OOP and its innate inheritance chain would be wise, as well as limiting possible OOP overhead, such as garbage collection.

  > interface in C#
Coincidentally, that is where my patience for abstraction in C# finally diminished.

yield and generators gave off an awkward syntactic over-caramelized sugar smell as well - I saw the need, to complement namespaces/access modifiers, but felt like a small tailored class would always outweigh the negligible time-save.

replies(1): >>41858351 #
11. Kuinox ◴[] No.41857205{4}[source]
Moq cannot do it. I forked Moq and made a library that can mock classes: https://github.com/Kuinox/Myna. It does this by weaving the class you mock at compile time (you can still use your class normally).
12. xnorswap ◴[] No.41857340{3}[source]
My example above is terrible, because in reality you'd have another level before this, which sorts out your global configuration, reads it and just injects service specific parameters and configuration for each service.

But I just wanted to illustrate idiomatic .NET DI, and on reflection picking configuration was probably the worst way to illustrate it.

13. abrookewood ◴[] No.41857895[source]
Kind of off topic, but can someone explain why else C# has factories and interfaces? Is it just mocking? I really don't understand the pattern at all. FWIW I am no dev.

EDIT: Found xnorswap's comment below about configuration, which I get - but as they mentioned, it does feel like "turtles all the way down".

replies(3): >>41858290 #>>41858837 #>>41860269 #
14. rcxdude ◴[] No.41858054[source]
But why is it so hard to read a file during a unit test? Files are pretty easy to mock in many different ways, all of which are pretty fast. You don't need a special-purpose interface to be able to test the code that uses a config file.
replies(3): >>41858083 #>>41858297 #>>41858623 #
15. wobblyasp ◴[] No.41858083[source]
It's not, but maybe I don't want to create a file for the tests. The point they're trying to make is that it's a personal preference and not an obvious "this way is better".
16. FroshKiller ◴[] No.41858290[source]
A non-mocking use of mine: I have a factory with a method that returns instances of a particular interface for publishing typed events to a pub/sub service. The caller of the factory doesn't have to be updated with new concrete types as new events are added, because it's the factory's responsibility to create the events. The event types themselves just implement the interface that's required to serialize them for the pub/sub service.
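Roughly that shape, sketched out (every name below is invented for illustration; this is the pattern, not the poster's actual code):

```csharp
using System;

public interface IPublishableEvent
{
    // The one thing the pub/sub layer needs from every event.
    string Serialize();
}

public sealed class OrderPlaced : IPublishableEvent
{
    public string OrderId { get; init; } = "";
    public string Serialize() => $"{{\"type\":\"OrderPlaced\",\"orderId\":\"{OrderId}\"}}";
}

public static class EventFactory
{
    // Callers never name concrete event types; only the factory knows them,
    // so adding a new event type doesn't touch any caller.
    public static IPublishableEvent Create(string kind, string payload) => kind switch
    {
        "OrderPlaced" => new OrderPlaced { OrderId = payload },
        _ => throw new ArgumentException($"Unknown event kind: {kind}"),
    };
}
```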
17. stickfigure ◴[] No.41858297[source]
Let's say you want to test bootstrapping your system with various configurations.

You could make a few dozen different configuration files. Or maybe it's more than that because you want to test permutations. Now you're maintaining a bestiary.

So instead you think "I'll write code that generates the config file for each test". And that's reasonable sometimes.

On the other hand, the single-responsibility principle can be reasonably applied here and "reading the config data" is a good single responsibility. You don't want to have to write code to muck with json or xml every time some component needs a configuration value. So there should already be a clean boundary here and it often makes sense to test the components separately.

There's not one rule. The article author sounds like an excitable jr engineer that's never written software at scale.

replies(3): >>41859388 #>>41860580 #>>41861658 #
18. Bjartr ◴[] No.41858351{6}[source]
I love the idea of syntactic engagement gone too far as "burnt sugar"
19. xnorswap ◴[] No.41858623[source]
Perhaps a better example is a real world example I ran into just this week.

I found out that our unit test suite would only pass when run under elevated credentials. Our internal developer tooling had been running under semi-privileged credentials for years, and was the usual way of triggering a full unit test suite run, so no-one really noticed that it didn't work when run at a lower elevation.

When run from a lower privilege, a unit test was failing because it was failing to write to a registry key. I first double checked that I wasn't accidentally triggering integration tests, or that the test should be tagged integration.

But no, we had simply failed to abstract away our registry writes within that service. Of course no unit test should be writing to the real registry, but this settings manager was just being new'ed up as a concrete class, and there was no interface for it, and so it was just naively making registry edits.

That this settings class wrote directly to the Windows registry as its data store wasn't noticed as an issue for years, because all the times it had previously been run, it had been under credentials which could access that registry key.

And yes, there are different ways we could have mocked it, but favouring a concrete class meant this registry edit was happening unnoticed across all our unit test runs. And I suspect this might have been behind some of the dreaded flaky test syndrome: "I just tweaked a CSS file, why did my PR build fail?" Because 99% of the time it was fast enough that it didn't cause issues, but with just the right concurrency of test execution, you'd have a problem that wouldn't show up in any meaningful error message - just a failed test "for no reason".

Why shouldn't unit tests read real-world files? Because that introduces brittleness, and an inherent link between tests. If you want fast and easy to parallelize tests they need to have no real-world effects.

A test suite which is effectively pure can be executed orders of magnitude more quickly, and more reliably, than one which depends on:

  - Files

  - DateTime.Now (Anywhere in your code. A DateTimeFactory which you can mock might sound ridiculous, but it's perhaps the best thing you can do for your code if your current code / tests run on real dateTimes. Even for production code, having a DateTimeFactory can be really helpful for relieving some timing issues. )

  - Databases ( This is more "obvious", but still needs to be said! )
And so on. A unit test suite should boil down to essentially pure statements. Given inputs A,B,C, when applying functions f, g, h, then we expect results h(g(f(A,B,C))) to be X.

This can also be the difference between a test taking <1ms and taking <10ms.

As a final point, you're usually not wanting to "test the code that uses a config file", you want to test code which you don't care if it uses a config file.

The "Code that uses a config file" should be your Configurator class. What you actually want to test is some service which actually produces useful output, and that contains business logic.

Yes, "separation of concerns" can be taken too far, but having something else responsible for the manner in which your service is configured so that your service can just take in a business-domain relevant typed settings object is helpful.

As I've said elsewhere, config is actually a terrible example, because it's essentially a solved problem, MS released System.Configuration.ConfigurationManager ( https://www.nuget.org/packages/system.configuration.configur... ), and you should probably use it.

If you're not using that, you ought to have a good excuse. "Legacy" is the usual one of course.
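The DateTimeFactory idea above can be as small as this (IClock/FixedClock are illustrative names, not from the codebase; .NET 8's built-in TimeProvider plays the same role):

```csharp
using System;

public interface IClock
{
    DateTime UtcNow { get; }
}

// Production: the real clock.
public sealed class SystemClock : IClock
{
    public DateTime UtcNow => DateTime.UtcNow;
}

// Tests: time stands still, so assertions are repeatable.
public sealed class FixedClock : IClock
{
    public DateTime UtcNow { get; }
    public FixedClock(DateTime fixedNow) => UtcNow = fixedNow;
}

public static class Expiry
{
    // Business logic asks the injected clock, never DateTime.Now directly.
    public static bool IsExpired(DateTime deadline, IClock clock) =>
        clock.UtcNow > deadline;
}
```

The same test then passes on December 31st at 23:59 as on any other day.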

replies(2): >>41860108 #>>41861574 #
20. pylua ◴[] No.41858730{3}[source]
What's wrong with having an interface with one implementation? It's most likely meant to be extended by code outside the current repo. It's not a smell in any sense.
replies(2): >>41859526 #>>41863712 #
21. gpderetta ◴[] No.41858756[source]
A first-class function is just an interface. Just ask the venerable master Qc Na.
22. criddell ◴[] No.41858837[source]
I've used them in the past to keep interface and implementation separate. It's an easy way to stick an adapter between something concrete and the thing that needs something but doesn't care where it's coming from.

So, for example, I could have an IGadgetStore with methods for creating, retrieving, updating, and deleting gadget instances, and then I can have a bunch of different classes implementing that interface. An obvious example is to have a PostgresGadgetStore, a MysqlGadgetStore, and a CsvFileGadgetStore. If the user wants to implement their own store that I haven't written, they can.
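That shape, sketched (Gadget and the in-memory backend are invented here; the Postgres/MySQL/CSV versions would implement the same four methods):

```csharp
using System.Collections.Generic;

public sealed record Gadget(string Id, string Name);

public interface IGadgetStore
{
    void Create(Gadget gadget);
    Gadget? Retrieve(string id);
    void Update(Gadget gadget);
    void Delete(string id);
}

// One of many possible backends; callers only ever see IGadgetStore.
public sealed class InMemoryGadgetStore : IGadgetStore
{
    private readonly Dictionary<string, Gadget> _items = new();

    public void Create(Gadget gadget) => _items.Add(gadget.Id, gadget);
    public Gadget? Retrieve(string id) => _items.TryGetValue(id, out var g) ? g : null;
    public void Update(Gadget gadget) => _items[gadget.Id] = gadget;
    public void Delete(string id) => _items.Remove(id);
}
```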

replies(1): >>41864692 #
23. FrustratedMonky ◴[] No.41859117[source]
The thing that should be tested is whatever you are handing to a client app to use. If it is an interface, then test the interface, not the file behind it. If the client will get the file, then test that the file is correct.

So in this case, is this entire file being handed to clients to do what they will with it? Does that make sense as an interface?

If you are building an app, and you want other parts of the app, or clients of the app, to use this data, does it make sense to just hand over the entire file as the interface.

Basically :

Programmer 1: "Hey man, show me your data interface so I can build a quick lookup on something"

Programmer 2: "here is the whole damn file, do whatever you want and leave me alone. Figure it out yourself, and no I'm not going to give you a schema and I'll change it whenever I want".

24. dogleash ◴[] No.41859388{3}[source]
> that's never written software at scale.

Is this like a newer version of that insult where people would say someone's opinion doesn't matter because they worked on a project that never shipped (regardless of how much or how little they contributed to the failure)? Just replacing it with an AWS bill-measuring contest?

replies(1): >>41861108 #
25. sigseg1v ◴[] No.41859509[source]
You are right. I read their posts as the ramblings of someone who is currently in shock, found a bunch of bad practices in the logging + data retention, and is now just tongue-in-cheek mocking (the puns...) everything even if they don't have much experience with it. I would probably say something similarly incorrect if I found some perl and tried to understand it because I know nothing about writing maintainable perl.
replies(1): >>41862062 #
26. klibertp ◴[] No.41859526{4}[source]
90% of single-implementation interfaces (in Kotlin on Android projects I've seen) are internal (package/module private, more or less.) So no, they are not meant to be extended or substituted, and tests are their only raison d'etre (irony: I've almost never seen any actual tests...) This is insane because there are other tools you can use for testing, like an all-open compiler plugin or testing frameworks that can mock regular classes without issues.

An interface with a single implementation sometimes makes sense, but in the code I've seen, such things are kludges/workarounds for technical limitations that haven't existed for more than a decade. At least, it looks that way from the perspective of a polyglot programmer who has worked with multiple interface-less OOP languages, from Smalltalk to Python to C++.

27. cruffle_duffle ◴[] No.41859750[source]
I am lost - is that a code comment, or the author commenting on something they found?

Because I always enjoy leaked code comments. It’s like “tell me how you really feel about this shitty bloated enterprise framework you are using”.

There were some good ones in the leaked windows source code, weren’t there?

replies(2): >>41859812 #>>41863473 #
28. robrtsql ◴[] No.41859812[source]
That's the author commenting on something that they found.
29. aidenn0 ◴[] No.41859882[source]
If you need to make your code more baroque and harder to understand in order to unit-test it, that seems like the tail wagging the dog.
replies(2): >>41860371 #>>41864272 #
30. shagie ◴[] No.41860108{3}[source]
> DateTime.Now

I've got this in several places.

I have code that needs to check if other date is within two weeks of today. I have test data.

I could either modify the test data based on today's date (adding other logic to tests that itself could be faulty), do something while loading the test data... or have the date to be used for comparisons be injected in.

That date is July 1, 2018. It was selected so that I didn't have difficulty with reasoning about the test data and "is this a leap year?" or across a year boundary on January 1.

It's not a "I don't trust it to work across those conditions" but rather a "it is easier to reason about what is 60 days before or after July 1 than 60 days before or after January 1".

And returning to the point - injectable dates for "now" are very useful. Repeatable and reasonable tests save time.

31. burnte ◴[] No.41860221[source]
> I know it's cool to "hate" on OO, but "just read the fucking file" doesn't work if you want to run your unit tests without reading a fucking file.

Then don't do that, if in the real world it'll read a fucking file, then test with reading a fucking file. Tests aren't there to just be passed, they're to catch problems and if they're not testing the same workflows that the code will see IRL then the test is flawed. The first test should be reading a fucking file and that fucking file could be full of all sorts of garbage.

Same goes for non-fucking files.

replies(4): >>41860439 #>>41860785 #>>41861793 #>>41862064 #
32. nonameiguess ◴[] No.41860269[source]
I don't think it's off-topic at all. The ideal this kind of thing is trying to achieve is separation of concerns. One developer or team or even entire organization is writing things like serializers for specific kinds of file formats or other sources of persisted data like databases and environment variables, or things like the Java Spring Boot externalized config. Another organization is just trying to create an application that requires configuration. They don't necessarily want to worry too much about where it comes from. Especially in places that strictly separate development and operations, they'll probably have no say anyway, and it'll change over time, and they're not going to want to change their own code when it does.

You can analogize this to non-software use cases. I've got Phillips and flathead screwdrivers and ideally don't want to worry about the specific qualities of a particular kind of screw when selecting one. It either has one slot on the head or two. That's the interface to the screw, and it should be the only thing I have to worry about when selecting a screwdriver.

Unfortunately, this kind of thing can balloon out of control, and in the worst kinds of "enterprise" Java shops I was involved in deep into my past, where concrete classes were injected at runtime by xml file loaded into the framework, it was literally impossible to tell what code was going to do simply by reading it, because it is impossible to know what is being injected at runtime except by inspecting it during runtime. It's a pretty frustrating experience when reading an entire code base doesn't tell you what the code actually does.

replies(1): >>41862760 #
33. mempko ◴[] No.41860371[source]
Exactly! It's like that Skinner Simpsons meme. Are unit tests the problem and I'm wasting my time? No, it's the config files that are wrong.
34. xnorswap ◴[] No.41860439[source]
Those are integration tests. Integration tests are great, but not when you want to run thousands of them in a few minutes. And not when you want to have lots running in parallel, accessing and potentially making "changes" to the same files.

I'm happy to have a long running integration test suite that runs on a build server.

But while working on a project, I need fast running unit tests that I can edit and run to get fast feedback on my work. I find that "time to iterate" is key to effective and enjoyable development. That's why hot module reloading is an amazing innovation for the front-end. The back-end equivalent is quickly running affected unit tests.

So I'd rather unit test my FooFileReader to make sure it can parse (or not) what's in various files, and unit test the service which consumes the output of my FooFileReader by either parameterising the FooFile result or having an IFooFileReader injected. (Either works to separate concerns.)

While unit testing, I'm going to test "given that System.IO.File can read a file", and write tests accordingly. I don't want a test sometimes fails because "read errors can happen IRL". That doesn't help test my business logic.

I can even test what happens if read failures do happen, because I can set up my mock IFooFileReader to throw a FileNotFoundException or any other exception. I'd rather not have to force a real-world scenario where I'm getting such an error.

In a functional world, it's the difference between:

    function string -> result
and

    function string -> parsedType -> result
The second is cleaner and neater, and you can separately test:

    function string -> parsedType
    function parsedType -> result
The second is more testable, at the cost of being more indirect.

Interfaces and factories are just an idiomatic .NET way of doing this indirection over services and classes.

Of course you can also write more in a functional style, and there are times and places to do that too.
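The `string -> parsedType -> result` split above, made concrete (the types and thresholds are invented for illustration):

```csharp
using System;

public sealed record Settings(int TimeoutSeconds);

public static class Pipeline
{
    // string -> parsedType: the only part that knows about raw text.
    public static Settings Parse(string raw) =>
        new(int.Parse(raw.Trim()));

    // parsedType -> result: pure business logic, trivially unit-testable
    // without any file or parser in sight.
    public static string Classify(Settings settings) =>
        settings.TimeoutSeconds > 60 ? "slow" : "fast";

    // string -> result, composed from the two testable halves.
    public static string Run(string raw) => Classify(Parse(raw));
}
```

Each half gets its own small tests; the composition is almost too simple to get wrong.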

replies(5): >>41860567 #>>41861709 #>>41862005 #>>41862051 #>>41862404 #
35. consp ◴[] No.41860567{3}[source]
Unit tests are nice to have if you want to meet test coverage or have sufficient time to implement them properly. In practice they often contain only vague assumptions (the test passes, but the integration fails because those assumptions are false) or check things any basic code review should catch (and if you keep paying peanuts, reviewers won't do that, so you make more unit tests).
replies(1): >>41860957 #
36. pphysch ◴[] No.41860580{3}[source]
> Now you're maintaining a bestiary.

Any battle-hardened test suite is already a bestiary. Having a subfolder of diverse & exemplary config files, that could be iterated over, is not adding much to the pile.

replies(1): >>41861608 #
37. jessekv ◴[] No.41860785[source]
Yeah modeless software is one honking great idea. (RIP Larry Tesler)
38. jessekv ◴[] No.41860957{4}[source]
A good interface is testable, this is how you build up reliable abstractions to solve higher level problems. The devs on my team that take shortcuts here waste more time in the end.

There is no cost trade-off.

replies(2): >>41862483 #>>41862622 #
39. stickfigure ◴[] No.41861108{4}[source]
"Software at scale" is different from "data at scale" is different from "compute at scale".

But yeah, when I hear "STOP MAKING SERVICES AND FACTORIES AND INTERFACES AND JUST READ THE FUCKING JSON FILE YOU ENTERPRISE FUCKERS" I think "developer who's never worked on anything more complicated than a chat app, and isn't old enough to have learned humility yet".

replies(1): >>41862509 #
40. gregmac ◴[] No.41861534[source]
Yeah, IConfigurationService implies separation of concern. Code using it doesn't have to care where the configuration came from, just that it is there. Someone separately can write the concrete ConfigurationFileService:IConfigurationService that reads/parses files.

IConfigurationFileService implies abstraction of file system-based configuration. Are we planning that there's going to be a different way to read configuration files in the future, and what exactly is that? If no one can articulate it, it just seems like architecture astronautism and: YAGNI.

IConfigurationService makes writing unit tests for anything that uses it way easier, too. There can be a simple TestConfigurationService:IConfigurationService that just implements everything as settable, and in your test code you can provide exactly the properties you need (and nothing more), and easily have 100 variations of configs to ensure your code is working. Without the headache of dealing with actual files separate from your test code, or worse, shared with other test code.

I've actually written multiple long-lived pieces of software this way, and more than once ended up implementing stuff like environment variable-based configuration, REST API-sourced configuration, and even aggregations that combine multiple sources, eg:

    new AggregateConfig(new ServerConfig("https://whatever"), new EnvironmentConfig(), new FileConfig("/some/path.config"));
All that code that used IConfigurationService is completely untouched and unaware of any of this, letting whoever is doing this as part of changing deployment (or whatever) be productive quickly with very little knowledge of the rest of the (possibly massive) app.
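A hedged sketch of that aggregation (simplified to a single Get method; the real interfaces would be richer, and the class names here are the comment's, not a real library's):

```csharp
using System.Collections.Generic;
using System.Linq;

public interface IConfigurationService
{
    string? Get(string key);
}

// Stand-in for EnvironmentConfig / FileConfig / ServerConfig backends.
public sealed class DictionaryConfig : IConfigurationService
{
    private readonly Dictionary<string, string> _values;
    public DictionaryConfig(Dictionary<string, string> values) => _values = values;
    public string? Get(string key) => _values.TryGetValue(key, out var v) ? v : null;
}

public sealed class AggregateConfig : IConfigurationService
{
    private readonly IConfigurationService[] _sources;
    public AggregateConfig(params IConfigurationService[] sources) => _sources = sources;

    // First source that knows the key wins, so earlier sources override later ones.
    public string? Get(string key) =>
        _sources.Select(s => s.Get(key)).FirstOrDefault(v => v != null);
}
```

Consumers keep asking one IConfigurationService for values; where those values come from is decided once, at composition time.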
41. ◴[] No.41861574{3}[source]
42. stickfigure ◴[] No.41861608{4}[source]
A totally reasonable approach! Sometimes.
43. nucleardog ◴[] No.41861658{3}[source]
The interface in question is `IConfigurationFileService`. I can only guess at the actual interface, but based on the name it doesn't sound like it's abstracting away the need to put your configuration into files.

Could just be a case of bad naming and it solves everything you're saying. But it sounds like pointless enterprise-y fuckery to me.

I would not say the same thing about `ConfigurationFileService : IConfigurationLoader` or something.

44. wtallis ◴[] No.41861709{3}[source]
> While unit testing, I'm going to test "given that System.IO.File can read a file", and write tests accordingly. I don't want a test sometimes fails because "read errors can happen IRL".

That sounds pretty squarely in the "you ain't gonna need it" category. If your test harness cannot make a temporary directory and populate it with a copy of the test config file that's stored in the same SCM repo as the test case code, then you simply have a broken CI server. There's no need to complicate your codebase and make your tests less realistic all to avoid hypothetical problems that would almost certainly break your test suite before the test case gets around to attempting an fopen. Just read the damn file.

There are more complicated instances where mocking and dependency injection is needed. "fopen might fail on the CI server" usually isn't one of them.
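What "just read the damn file" looks like inside a test, sketched (the JSON content is made up):

```csharp
using System;
using System.IO;

// Arrange: write a throwaway config file under the OS temp directory,
// with a random name so parallel test runs never collide.
var path = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName() + ".json");
File.WriteAllText(path, "{\"Timeout\": 30}");

try
{
    // Act: the code under test reads a real file, exactly as in production.
    var text = File.ReadAllText(path);
    Console.WriteLine(text); // stand-in for handing `text` to the code under test
}
finally
{
    // Clean up after the test.
    File.Delete(path);
}
```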

45. scrapcode ◴[] No.41861793[source]
Yeah, that's called a fucking integration test.
46. miningape ◴[] No.41862005{3}[source]
You're mixing definitions - integration tests concern testing the "integration" between all parts of a solution. It has nothing to do with reading a JSON file; it's perfectly acceptable to read from a JSON file and use its data in a unit test.

Also, reading/parsing a JSON file is fast enough for hot reloads / auto-rerunning unless you have multi-GB files - so the argument for speed makes no sense. I'd argue it's slower as a whole to code up mocks and fill in the data than to copy-paste some JSON.

I do agree with the second being neater, however past a certain point of enterprise coding it's a negligible difference compared to the overall complexity of the code - so taking a shortcut and making your tests simpler through JSON files actually ends up being the cleaner / neater solution.

>While unit testing, I'm going to test "given that System.IO.File can read a file", and write tests accordingly. I don't want a test sometimes fails because "read errors can happen IRL". That doesn't help test my business logic.

Since you're given that - use it. If your test fails because a "low level" dependency is failing it's indicating something is seriously fucked up on your machine.

replies(1): >>41864392 #
47. rbanffy ◴[] No.41862051{3}[source]
> And not when you want to have lots running in parallel, accessing and potentially making "changes" to the same files.

Reading a file is a fast operation these days. Re-reading a file shortly after a read is less than a memory copy.

Making the structure more complicated so that you can avoid reading a file during unit tests is a poor investment of resources - that complexity will haunt the team forever.

48. dylan604 ◴[] No.41862062[source]
> maintainable perl

isn't that an oxymoron?

49. miningape ◴[] No.41862064[source]
> Tests aren't there to just be passed, they're to catch problems

So many developers don't understand this simple concept - it manifests in two ways:

1. Not writing tests

2. Writing too many / too-specific tests

Testing should always be focussed on the OUTCOMES never the implementation. That's why they're so good for making sure edge cases are covered - since we are able to assert the input and expected outcome of the code. I like to use the mental image that in an ideal world I could put the same tests on a completely separate implementation and it would still pass (mocks/stubs, and implementation specific tests don't pass this).

I'm always far more frustrated by 2 than by 1 - since 2 adds so much unnecessary code / complexity that doesn't need to be there, growing technical debt through the tool that should help us manage it. They make changing implementations painful. And worst of all they think they're doing something correctly and when combined with the sunk-cost fallacy they're incredibly resistant to changing these fucked up tests.

Don't get me wrong 1 is annoying too but he'll at least add the tests when you ask him to and not over engineer everything.

replies(1): >>41862500 #
50. jjice ◴[] No.41862117[source]
> You can extend/mock the class itself. Refactoring code is easy and cheap. There is no reason for complex abstractions that protect implantation outside of libraries and frameworks.

"Mock" can be a loaded word in this context, so please excuse me if I'm looking at it through a different lens, but if you're using some sort of mocking toolkit (like Jest or similar), I'd argue that those mocks are much more confusing than an interface with an implementation.

I personally love an interface because it defines the most narrow set of operations an object needs to support to be passed in, and the implementation of those are completely irrelevant for calling. In many cases, I personally find that a lot simpler and cleaner to read.

51. neonsunset ◴[] No.41862404{3}[source]
The vast majority of codebases that spam factories are misusing the pattern, simply adding more boilerplate and abstraction bloat for something that is easily expressed in idiomatic C# itself.

You see it everywhere someone hand-rolls a "ServiceResolver" or "DtoMapper" that wraps what DI or the ORM already handles on your behalf, simply to stay consistent with ancient, badly written code rooted in practices carried over from heavier Java codebases, and before that C++.

52. miningape ◴[] No.41862483{5}[source]
In most cases, especially for important code paths, I agree.

There is one case where I think it is justifiable not to write a single test: startups. Specifically, pre-seed and seed-funded companies are allowed to skip the majority of tests - however, critical paths, especially those that matter to customers (i.e. transactions), must still be tested.

By the time you have built out that MVP and have a few customers, you should transition to writing more tests. And as the number of engineers, the scope, or the complexity grows, you need to add more tests.

53. IggleSniggle ◴[] No.41862500{3}[source]
There's a lot of room for nuance. If you "just read the fucking file" but the file isn't a "real" configuration file then isn't it just a "mock?" If you replace all network calls with an interceptor that forwards all calls and responses, and just check what's happening as a "listener," aren't you mocking out the network calls to a non-real implementation?

At the end of the day, tests are necessarily a mock-up of what's real. You just happen to disagree with where some people put the abstraction layer. I also would like to make my tests more "real", but I have a lot of sympathy for folks who are trying to test something smaller without involving e.g. a file. After all, the whole point of "everything is a file" in Unix is that we shouldn't need to worry about this detail; it's an OS concern. If you write to a file that's not actually a file on disk but actually a device, it should fundamentally be okay and work as expected.

replies(1): >>41862666 #
54. ◴[] No.41862509{5}[source]
55. CamperBob2 ◴[] No.41862622{5}[source]
It's testable right up until the point where it's asynchronously interactive.

Would unit tests have avoided the Therac-25 incident?

56. miningape ◴[] No.41862666{4}[source]
Yeah don't get me wrong, I'm not anti-mock - real code is messy, and the ideal of the same tests running everywhere will never work, so mocks are necessary. But I do think there's a lot more harm from over-mocking, than under-mocking.

> file isn't a "real" configuration file then isn't it just a "mock?"

I want to say "no", but I haven't thought about it enough yet. My reasoning is that the file itself contains the expected messages to/from the system, since it is the body of whatever the system should respond to. And while it is only one layer removed from creating the same object in memory for your test, it "feels" different because you can pull the file straight out of your codebase and replay it with curl - you can't do that with an in-memory object.

replies(1): >>41862925 #
57. IggleSniggle ◴[] No.41862760{3}[source]
> it was literally impossible to tell what code was going to do simply by reading it, because it is impossible to know what is being injected at runtime except by inspecting it during runtime. It's a pretty frustrating experience when reading an entire code base doesn't tell you what the code actually does.

I've worked in "legacy" nodejs code since node 0.x. Glad to hear that there might be hope of codebases that don't have this problem. I thought typescript would help, but I've learned that fancy generics can ensure that it's still quite possible to have no idea what something will actually do in a real world environment, you'll just have a lot more cognitive overhead in wondering about it.

To be clear, I love ts and fancy generics that try to impose a Haskell-like determinacy on js Object structure with exclusivity and exception guarantees and all the rest; I just also hate it/them, at the same time.

58. IggleSniggle ◴[] No.41862925{5}[source]
Just to work this out together a little more in discussion form, since I appreciate your attitude:

Consider these two scenarios:

- read "test1-config.json" from disk, into whatever most easy JSON-adjacent format makes sense for your lang

- just use the JSON-adjacent format directly

Isn't the difference between these that one couples the tests to the environment's input configuration (possibly including env vars and OS concerns around file I/O), making running the tests more confusing/complicated in aggregate, while the other couples the input configuration to the test code itself, making the unit under test clearer but potentially less reflective of the overall system?

Effectively this is just an argument between integration tests and unit tests. Unit testers certainly have the rhetorical upper hand here, but I think the grug-brained developers among us feel that "the whole program should be a pure function."

That can ultimately be reduced to a P-NP problem.

replies(1): >>41863338 #
59. miningape ◴[] No.41863338{6}[source]
yeah - I don't think we should go so far as to write a config file for a test. But if we have something that is already readily convertible to/from json, it should be used. Not seeing it so much as a config for a test but as an argument we're storing in a separate file.

For example, if we had a DTO that serialises to/from JSON, we should be storing the JSON rather than creating the DTO manually - and I would push it further to any structure that is easily transformed from JSON, like extracting a certain property and using that in the test (although this is also context-dependent, for example if there are other tests using the same file). As a counter-example, I wouldn't advocate using JSON config files to test something completely unrelated to an underlying JSON structure.

> That can ultimately be reduced to a P-NP problem

Yeah ideally the goal should be to write the simplest code possible, however we get there - shoehorning an approach is always going to add complexity. I think there's a lot of danger from taking rhetoric too far, sometimes we push an abstraction to its limits, when what's really required is a new perspective that works well at these limits.

Effectively I think there's a range in which any argument is applicable; it's a matter of assessing whether the range is large enough, the rules simple enough, and whether it solves the actual problem at hand.

replies(1): >>41864443 #
60. PyWoody ◴[] No.41863473[source]
I'm sure you've seen this, but "The Simpsons Hit & Run Source Code Comments, Read by Comic Book Guy" (https://www.youtube.com/watch?v=R_b2B5tKBUM) is an all-timer.
61. throwaway365x2 ◴[] No.41863712{4}[source]
In that case you have more than one implementation, or at least a reasonable expectation that it will be used. I don't have a problem with that.

My comment was regarding interfaces used internally within the code, with no expectation of any external use. I wrote from a modern Java perspective, where classes are mockable directly. Apparently interfaces are used in .NET to create mocks in unit tests, which could be a reason to use that approach if that is considered "best practice"

62. qwertycrackers ◴[] No.41864182[source]
Your unit tests should just take the result of loading the file as an argument or other type of injection param. Then you can hardcode your unit test config parameters in the test code itself. That's the appropriate place for this kind of indirection.
63. consteval ◴[] No.41864272[source]
> more baroque and harder to understand

I don't understand how this is the case. If anything, an interface is MUCH easier to understand than a variety of functions strung together.

I mean, this is the whole reason we have APIs. If I'm a consumer, I would much rather read and understand the API and its contract than try to read through the code to find out requirements.

replies(1): >>41871799 #
64. Izkata ◴[] No.41864392{4}[source]
It's an absurdly common mistake though, on the level of Hungarian notation being misused and having to be split into two names.

Basically too many unit testing tutorials were simplified too far, so the vast majority of people think a "unit" is syntactic rather than semantic. Like, a single function rather than a single action.

65. Izkata ◴[] No.41864443{7}[source]
> yeah - I don't think we should go so far as to write a config file for a test. But if we have something that is already readily convertible to/from json, it should be used. Not seeing it so much as a config for a test but as an argument we're storing in a separate file.

This sounds like a great candidate for the facade pattern: https://en.m.wikipedia.org/wiki/Facade_pattern

Basically you hide dependencies behind a small interface, which lets you swap out implementations more easily. The facade is also part of your own codebase rather than an external API, so it gives you something stable to mock. Despite the name coming from building facades, I think of these as a stable foundation of things a module calls out to: your code is a box with an interface on one side (what tests and the rest of the codebase interact with) and the facade(s) on the other (dependencies, or mocks of them).

replies(1): >>41871794 #
66. abrookewood ◴[] No.41864692{3}[source]
OK that makes sense. As an outsider, the C# code bases I look at seem to do this as standard practice, even if the requirement for different classes never materialises. I guess you get used to looking at it, but it seems (perhaps naively) as wasteful and a bit distracting.
replies(1): >>41864911 #
67. neonsunset ◴[] No.41864911{4}[source]
> it seems (perhaps naively) as wasteful and a bit distracting.

It is.

Nowadays, .NET is usually able to do away with the abstraction cost of such interface abuse luckily, but it remains an additional item you mentally have to deal with, which isn't good.

Single-implementation interfaces are still considered an anti-pattern, and teams that over-abstract and mock everything out in unit tests usually just waste time in pursuit of a multi-decade-old cargo cult. Such tests often verify that modules are wired from specific implementations rather than that they satisfy the interface contract, which is terrible. They also tend to turn stateless logic that could have lived on a static class into an interface plus an implementation injected with DI, which is then mocked out, instead of just calling methods on a class. More difficult to remove, worse locality of behavior, doesn't answer the question "if tests are green, are we confident this will work in prod?" - sadness all around.

I agree with your sentiment. It's much more practical to write functional and component-level tests with coarser granularity of individual test items, but with more extensive coverage of component inputs. There's a wealth of choices for doing this with little effort (e.g. testcontainers).

68. IggleSniggle ◴[] No.41871794{8}[source]
I have seen over-reliance on the facade pattern devolve into endless indirection that makes the code needlessly confusing. If you are already familiar with the codebase it doesn't seem like a big deal, but during onboarding you'll find your new teammate combing through file after file after file just to discover, "oh, there's never any external API call or specific business logic involved; we were just reading a static JSON file from disk whose content doesn't change during a single run."

Using the known, baked-in stdlib functions for standard behavior removes a lot of potential uncertainty from your codebase (while sometimes making it harder to test).

69. aidenn0 ◴[] No.41871799{3}[source]
We are talking about a single function that possibly takes zero arguments versus an interface (TFA doesn't seem to show the code, but the interface presumably exists for DI).

I have waded through such code in Java, rather than C#. At least some of it is fighting the language; Java is pretty hostile to writing DI style code.

On top of that, even in languages that are more DI friendly, DI significantly decreases the lexical locality of code that is dynamically very close.

replies(1): >>41882592 #
70. consteval ◴[] No.41882592{4}[source]
I don't think you necessarily need lexical locality; rather, you need specification, and interfaces are just an easy way to write a specification. The problem with "too much" lexical locality is that now you have to search DEEP into call stacks to figure out what's really going on.

The whole point of specifications is trying to extract the most requirements in the least amount of time. With more "top level" interfaces and tools like DI you can do that, but certainly you can take it too far. A single function, I'd say, is way too far.