67 points naison | 6 comments
remram ◴[] No.43116704[source]
I don't really understand the point of this. I have a production cluster and I develop locally. My dev environment is not connected to the production cluster. That seems super dangerous.

My dev environment has the database with mock data, the backend, etc., all running there. I would never connect to the production cluster. I don't need to VPN into another cluster to run locally.

Even if I have a dev cluster/namespace, I will run the code I'm currently developing there. That's the point of a dev cluster. (Tilt, for example, can do both: a local cluster (minikube/k3d/...) or a remote test cluster.)

I don't understand in what situation you have an app that needs to partially run in the cluster and needs to partially run on your machine.
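For reference, a minimal Tiltfile sketch of the two modes mentioned above (Tilt's config language is Starlark, a Python dialect); the image name, manifest path, and kube-context name are placeholders, not anything from the thread:

    # Tiltfile -- image ref, manifest path, and context name are made up.
    # With a local cluster (minikube/k3d/kind), Tilt uses the current kube
    # context by default. A remote test cluster must be allowed explicitly.
    allow_k8s_contexts('dev-cluster')

    docker_build('example.com/my-backend', '.')    # rebuild the image on code changes
    k8s_yaml('k8s/dev.yaml')                       # apply the dev manifests
    k8s_resource('my-backend', port_forwards=8080) # forward the service locally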

replies(2): >>43116864 #>>43116868 #
kube-system ◴[] No.43116864[source]
> the database

> the backend

Many k8s clusters are quite a bit more complicated than a single database and single backend. Some have dozens or hundreds of deployments. Even if you have the horsepower to run large deployments on your system, you will have problems wiring it all up to match production.
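As a rough sketch of the "partially in the cluster, partially on your machine" case: assuming a tool like the one in the submission has already routed cluster DNS and service IPs to the laptop, the one service under development can run locally and still reach its in-cluster dependencies by their usual names. The service names here are hypothetical:

    # Runs on the developer's laptop. Assumes cluster DNS/routes are reachable
    # locally (e.g. via a cluster-VPN tool), so the dozens of other deployments
    # stay in the cluster. The 'orders' and 'payments' services are made up.
    import requests

    ORDERS = "http://orders.default.svc.cluster.local:8080"
    PAYMENTS = "http://payments.default.svc.cluster.local:8080"

    def checkout(order_id: str) -> dict:
        # Code being actively developed on the laptop...
        order = requests.get(f"{ORDERS}/orders/{order_id}", timeout=5).json()
        # ...calling real in-cluster services instead of local mocks.
        charge = requests.post(f"{PAYMENTS}/charges", json=order, timeout=5).json()
        return {"order": order, "charge": charge}

    if __name__ == "__main__":
        print(checkout("demo-123"))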

replies(2): >>43116932 #>>43117848 #
1. remram ◴[] No.43116932[source]
And you can't develop one of those "deployments" without all the other ones being present?

That just seems like super-terrible engineering, but I can believe it. Gotta be "web-scale".

replies(2): >>43117089 #>>43122197 #
2. kube-system ◴[] No.43117089[source]
That's quite a big conclusion to draw from my statement. Whether it is good engineering really depends on the problem you're solving, your team structure, your integration footprint, etc. Not everything is a custom CRUD app.
replies(1): >>43119362 #
3. cassianoleal ◴[] No.43119362[source]
> the problem you're solving, your team structure, your integration footprint

Those are factors to be accounted for in good engineering.

replies(1): >>43120720 #
4. kube-system ◴[] No.43120720{3}[source]
Engineering is dictated by the problems you are given to solve, not the other way around. There are many valid situations in which well-engineered solutions will have a developer running an entire Kubernetes cluster.

For a dead simple example: You may want to do this if the team is small enough that your developer is also your QA engineer... because you can't test the entire application if you're not running the entire application. There are likely hundreds of other reasons too.

To suggest that "any technology with >3 interconnected components is bad engineering" is naive.

replies(1): >>43133100 #
5. cherry_tree ◴[] No.43122197[source]
I think you are talking past each other about different stages of development.

At early stages you are writing some code and tests within a single component; here you are iterating with a single binary/container. At some stage a change may involve multiple components.

Once you are satisfied with your code changes you would want to run those components in an environment that simulates how they communicate normally.

In Kubernetes this may mean you need your cluster and its networking components, which may need configuration changes tested as part of your new feature. You may have introduced new business metrics that you want to verify are collected and shipped to your desired metrics aggregator so that you can build and expose dashboards; you may want to create new alerts from these metrics and verify that they trigger as expected; and so on.

You can see how you may need to run many components in order to test a change in only one. I don’t think this is bad engineering, and I don’t think it’s specific to kubernetes or “web-scale”.
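To make the "new business metric" case above concrete, a minimal sketch using the prometheus_client library; the metric name and port are made up:

    # Exposes a hypothetical business metric for an in-cluster scraper to collect.
    import random
    import time

    from prometheus_client import Counter, start_http_server

    signups = Counter("signups_total", "Completed signups", ["plan"])

    if __name__ == "__main__":
        start_http_server(9100)  # serves /metrics on :9100
        while True:
            signups.labels(plan=random.choice(["free", "pro"])).inc()
            time.sleep(1)

Verifying that the scrape config actually picks this endpoint up, that the counter reaches the aggregator, and that an alert on its rate fires is exactly the part that needs the surrounding components running.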

6. remram ◴[] No.43133100{4}[source]
You shifted the goalpost from "needs a cluster to run" to "3 components" just so you could call us naive?