128 points darthShadow | 11 comments
1. ilaksh No.42812516
I think they have a valid complaint about the open source program Docker runs and the lack of response, but the overall tone seems like they are scolding Docker for not giving away its services for free.

I have always found it strange how quickly people started taking Docker for granted: relying on them completely while somehow dismissing their core utility as a trivial, unsophisticated layer.

It's like they never really got credit from most people on HN, or were ever considered worthy of getting paid, even though almost everyone uses their technology.

    replies(4): >>42812820 #>>42812835 #>>42812898 #>>42812940 #
2. bayindirh No.42812820
    FWIW, I have a personal Docker license, but I avoid containers where I can (because containerizing everything by default has its own set of problems). I use containers as "very fat, stateless" binaries which are run when I need to do something (generate a webpage, take backups, etc.).
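That one-shot, stateless usage pattern might look something like this (a sketch; the image name and mount paths are made up for illustration):

```shell
# Run a site generator as a throwaway, stateless container:
# --rm discards the container after it exits, so nothing lingers;
# the only side effect is the output written to the mounted volume.
docker run --rm \
  -v "$PWD/site:/src:ro" \
  -v "$PWD/public:/out" \
  hypothetical/site-builder:1.2 build /src -o /out
```

The container behaves like a "very fat binary": all of its dependencies travel with the image, and no state survives between runs.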

People took Docker for granted because startups and modern sysadmins absolutely despised installing software on physical or VM servers. On the tech side, Vagrant was making VMs easier, BSD had jails, and Linux needed something similar. So Docker found a legitimate gap in the stack, and timed it well.

Who wants to spend three hours installing a service when they can make it appear out of thin air in 40 seconds and deal with the shortcomings and consequences later, or containerize an application, disregard its hard requirements, and say "just add an X container in front"? (I'm not saying this is good, BTW.)

So Docker spread like wildfire and graduated to invisible/boring tech in about three months flat. Then, when the developers demanded money from the people for what they had built for them, people grabbed the pitchforks, or created literal forks of the software. I support the latter approach, not the former.

However, if they advertise a DSOS (Docker-Sponsored Open Source) program, they should do what it entails: be transparent, fair, and open about it.

    replies(2): >>42813360 #>>42813850 #
3. eproxus No.42812835
    But Docker said they would give away their services for free to all that meet the DSOS requirements. They did so in the past for this very organization and suddenly pulled the rug and went into radio silence.

The way I see it, Docker can't have their cake and eat it too. They can't claim the nice PR and goodwill of providing free access to open source projects while not actually doing so (and requiring those projects to pay to keep using the service in its existing capacity).

    Fine if they don’t want to provide a free service, but then they shouldn’t be able to claim to do so either.

    replies(1): >>42812966 #
4. skywhopper No.42812898
    Nah, this is a bad take. There’s no excuse for them to be unresponsive to active users. Even from a purely profit-focused point of view, if Docker doesn’t want to give away free stuff, they should be encouraging/begging/cajoling users like this to convert to a paid plan. But they’re just ignoring them instead?
5. thunky No.42812940
    > but the overall tone seems like they are scolding Docker

I didn't find the article to be scolding or offensive in its tone. It's just straight reporting of their experience and (IMHO valid) concerns.

6. exsomet No.42812966
    But they did do it. By their own admission in the post, that isn’t really in question.

The implied question is whether or not they should _continue_ to do it in perpetuity. If Docker did a cost-benefit analysis of the program and decided it wasn't worth it (maybe they didn't get that much good PR after all?), it's their prerogative to end it.

There's a perfectly valid gripe about the lack of communication, just as a matter of courtesy; but again, per their very own post, Docker (the company) has historically burned its hands on proactive communication before.

    replies(2): >>42813045 #>>42814400 #
7. Ekaros No.42813045
Sometimes I might agree with the take that no PR is better than bad PR. Just quietly dropping the whole thing could be the least bad publicity.
8. curt15 No.42813360
Containers took off because they were the easiest way for developers targeting Linux to get a predictable runtime environment. They freed developers from having to worry about the differences between Debian's and Red Hat's OpenSSL libraries, or even between different versions of a single distribution. You don't see nearly the same level of uptake among Windows developers, because not only is there a single Windows API for everyone to target, but Microsoft is also willing to bend over backwards to preserve backward compatibility.
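The "predictable runtime" point is concrete: an image definition pins the entire userland, so the OpenSSL your app links against is fixed by the image tag rather than by whatever distribution the host happens to run (a minimal sketch; the app name and package choices are illustrative):

```dockerfile
# Everything below the base image is frozen by the tag (or, stricter,
# by a digest), so the app sees the same libssl on every host.
FROM debian:12-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
        libssl3 ca-certificates \
    && rm -rf /var/lib/apt/lists/*
COPY ./myapp /usr/local/bin/myapp
CMD ["myapp"]
```

Whether the host is Debian, Red Hat, or anything else, the container carries its own Debian userland along with it.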

Containers also predated "modern sysadmins"; prior to Docker, Google ran its production software in chroots for the same reasons as above:

    >The software run by the server is typically run in a chroot with a limited view of the root partition, allowing the application to be hermetic and protected from root filesystem changes. We also have support for multiple libcs and use static linking for most library uses. This combination makes it easy to have hundreds of different apps with their own dependencies that change at their own pace without breaking if the OS that boots the machine changes.

    https://www.usenix.org/system/files/conference/lisa13/lisa13...

    replies(1): >>42814837 #
9. ahoef No.42813850
I started using it to get rid of all the moving parts: library versions, shifting Debian releases, etc. Everyone has the exact same environment locally and there is no confusion.

    It has its own flaws, but it was so much better than the alternatives.

10. saurik No.42814400
    The problem is that people believe such promises in the first place :/... if someone builds a fully-centralized ecosystem that has a network effect benefit of any kind, it would be dumb to believe they are going to do it forever without it becoming horrible, as eventually the system will become valuable enough that the people who control it will realize a tipping point has been reached that allows them to play the good old "I have altered the deal: pray I do not alter it further" card on the user community without enough ramifications.

And yet, people fall for this over and over and over again, as the centralized system tends to be slightly easier to use or slightly cheaper (but only due to subsidies) or comes to fruition slightly faster than a decentralized protocol or even a centralized system run by a non-profit could (though the latter still failed to save us from OpenAI... ;P but like, imagine if Docker Hub were pledged to and run by a battle-hardened bureaucratic non-profit such as the Apache Foundation, with a long track record of not extracting value from this sort of situation).

    > All of this has made us seriously reconsider what we do going forwards; we obviously won't pull all our images off Docker Hub, nor is it sensible to just stop pushing new images as it will seriously impact the many users we have who pull from there...

When you hand someone else control over how people find your content (central registries or walled gardens, both of which now insist on controlling the URL of your content), you've given away all of your negotiating power for when the deal is eventually altered. It should be obvious before you ever get into this situation that, one day, you will get screwed; and for a service that clearly costs more to host per user than anyone would ever pay for it, there is absolutely no possibility that the situation will continue forever without careful planning and attention to monetization.

    Nothing ever has to be built this way, BTW. I developed Cydia, the alternative to the App Store used on jailbroken iOS devices, and I explicitly did not host software myself: I set up a federated ecosystem based on APT/dpkg where people had the option to self-host their software or could work with larger (ad-supported) repositories (which I refused to run), and (and this is key) there was a seamless hand-off if you later migrated between repositories and you could even be hosted by multiple at the same time. To do this, though, you have to go in being a bit humble and, explicitly, not only reject dreams that one day you'll own the ecosystem, but work every day to prevent yourself from having that kind of unilateral power in the future.

    Imagine if GitHub or Facebook Pages actually worked like a Web 1.0 web hosting company (which, by and large, they are, only with single sign-on for comments/reactions and an algorithmically-sorted central feed aggregator): you would expect to be able to buy a domain name and configure a CNAME for your account, and, suddenly, the service loses much (not all!) of its power to later move on to the extraction phase... of course, services never want to do this, and users who like being "the reason why we can't have nice things" will even argue that the most minuscule of downsides such decentralized (distributed or federated or democratic or merely regulated) systems might have are unacceptable and we should go all in on the centralized ecosystem.
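The CNAME mechanism described above is plain DNS, nothing exotic (a sketch; the domain and hosting service names are hypothetical):

```
; Zone file fragment: point your own domain at the hosting service.
; Because visitors reach the content through a name you control,
; you can repoint it elsewhere if the service ever alters the deal.
www.example.org.  3600  IN  CNAME  pages.example-hosting.net.
```

The service still does the hosting; the difference is that the canonical URL, and therefore the ability to migrate, stays with the publisher.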

    https://youtu.be/vsazo-Gs7ms

    ^ A talk I gave in 2017 at Mozilla Privacy Lab on the numerous failure modes of centralized ecosystems, chock full of concrete cited examples -- every slide after the title slide, including even my "who am I?" and "any questions?" slides, is a screenshot of a news article from a reasonable source -- of the myriad situations people like to claim somehow won't ever happen --or at least wouldn't happen this time, as somehow this time is different than all the previous times-- actually happening :(. And like, if I were to do it again today, I would just have even more examples :(.

11. mikepurvis No.42814837
    Some have argued that the rise of containers correlates with the rise of Python, explaining that containers are particularly well suited to packaging up the dumpster fire that any moderately-complicated Python app quickly becomes.

    Of course now we have Rust and Go, but being able to shove your statically-compiled binary into a tiny scratch container and have it cooperate with orchestration systems is still a pretty nice abstraction— just harder to say if it would have been worth it had we not had Django apps needing to be made deployable first.