
>>> We no longer provision load balancers, or configure DNS; we simply describe the resources that we need, and Kubernetes makes it happen.

This is (part of) what keeps me in the stone age. You are provisioning load balancers and DNS - just one step removed through k8s.
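For illustration, the kind of "describe the resources" being discussed is a manifest like the one below (the hostname and app names are hypothetical); the cluster's cloud controller then provisions the actual load balancer, and external-dns, if installed, creates the DNS record:

```yaml
# A Service of type LoadBalancer: Kubernetes asks the cloud provider
# for a real load balancer, and external-dns creates the DNS record
# named in the annotation. You describe; the controllers provision.
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    external-dns.alpha.kubernetes.io/hostname: app.example.com
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

So the grandparent's point stands in a literal sense: the load balancer and the DNS record still exist, you just reach them through this declaration instead of directly.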

And my prior is that we need to understand that, be aware of it, have a model of what is going on to help develop and debug.

And so it feels a bit like "magic abstraction". And then to peek through the abstraction you suddenly not only need to know about DNS and which machine is running bind, but also how Kubernetes internally stores its DNS config, how it spits that out, and which version changed it.

In other words, you have to become an expert in two things to debug it.

And maybe it's worth it - but I struggle to see why it's not simpler to keep my install scripts going.

(OK I guess I am writing my own answer - but surely the point is what is the simplest level of thin install scripts needed to deploy containers?)



I currently see things the same way you do.

I love the idea of using Kubernetes, it sounds amazing initially, but then every single article I read about it turns into some epic blog post that leaves me worried that the whole house of cards could easily come crashing down.

Maybe in the future the abstraction will become rock solid, easy to install and manage, and ‘just work’, but it doesn’t feel like that to me now. There are too many ‘we had to come up with / use hack X to integrate it with software Y’ stories.

Until then, if you are on a small team with a small budget, I reckon keeping it simple is the better approach. Standard OSs with some bash scripts for provisioning, build and deploy. Even if it’s more manual work, and takes a bit longer, having an understanding of the platform you are building on is crucial.

BTW, if you are looking for a description of a way to do this sort of thing ‘the boring way’:

Robust NodeJS Deployment Architecture

https://blog.markjgsmith.com/2020/11/13/robust-nodejs-deploy...


K8S is an API that the majority agrees on, which is rare. There is a lot of amazing tooling, a staggering amount of ongoing innovation, all built on solid concepts: declarative models, emitted metrics (the /proc equivalent, but with larger scope) and versioned infrastructure as data (a.k.a. GitOps).

For someone who is known as the King of Bash (self-proclaimed) - https://speakerdeck.com/gerhardlazu/how-to-write-good-bash-c... - and after a decade of Puppet, Chef, Ansible and oh wow that sweet bash https://github.com/gerhard/deliver - even though all my workstations and work servers (yup, all running k3s) are provisioned with Make (bash++), I still think that K8S is the better approach to running production infrastructure. Using simple and well-defined components (e.g. external-dns, ingress-nginx, prometheus-operator etc.) that adhere to a universal API, and are maintained by many smart people all around the world, is a better proposition than scripting, in my opinion.

At the end of the day, I'm in it for the shared mindset, great conversations and a genuine desire to do better, which I have not seen before K8S & the wider CNCF. I will go out on a limb here and assume that I love scripting just as much as you do, but go beyond this aspect and you will discover that there's more to it than "thin install scripts that deploy containers" (which are not just glorified jails or unikernels).


I think you've hit the nail on the head - the point is not just Kubernetes, it's that you can build standard infrastructure on top. Any software can (in theory) be set up with a Helm chart, configured in a standard way through YAML ConfigMaps rather than some esoteric config files or scripts which are different for every piece of software.
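As a sketch of that standard shape (the names and values here are hypothetical), every application's configuration ends up looking the same:

```yaml
# One standard shape for configuration, whatever the app is.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-config
data:
  LOG_LEVEL: "info"
  # Whole config files can be embedded as values too:
  app.properties: |
    cache.size=256
    feature.flags=on
```

A Pod then references this via envFrom or a volume mount, identically whether the software underneath is nginx, Postgres, or your own service.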


By using K8s and similar technologies, you're buying standardization with underlying complexity and reduced efficiency.

In many cases, it's a good tradeoff, because you can now use standard tooling on everything.

Just like it's cheaper to ship an entire (physical) shipping container that's half-full than to ship the same stuff loosely. Or why companies will send you two separate letters on the same day with a small note that this is more efficient for them than collating them.

I assume that k8s also makes it much easier to move to a different cloud provider if you're unhappy with one (or the new one offers better pricing). Instead of rewriting your bespoke scripts that only you understand, anyone familiar with the technology will know which modules to swap to make it work with the new provider.


> And my prior is that we need to understand that, be aware of it, have a model of what is going on to help develop and debug.

If it's a sufficiently robust abstraction, you don't, you just learn the abstraction. Kubernetes has reached that point for many folks.

I no longer have a detailed mental model of how my compiler or LLVM works, I just trust that it does. When was the last time you needed to (or were capable of) debugging a bug in your compiler? A couple of human generations of work went into making that happen.

Note that it turns out compiling code well, or making a reliable orchestration system, is an enormously complex problem. At some point, the complexity outstrips the ability of even generalists in the field to keep up, yet the systems keep getting more reliable.

So in these types of cases, you can either do it yourself poorly (you're an amateur), do it yourself well (congrats, you've become an expert), or delegate.

This isn't really limited to computing. I delegate maintenance on my car to a mechanic, while I'm pretty sure a generation ago, everybody (in the US) changed their own oil and understood how the carb worked. Times change.


The car issue is tempting as an argument clincher. My problem is the army - a lot of their trucks and so forth are not computer-controlled, this-generation-of-Toyota abstractions, but have stayed inefficient-but-repairable trucks. Because they need trucks that are repairable, and all the computer understands is nice paved roads - which is exactly what armies don't drive on.

But yes, mostly I am not an army.


The computer being tuned for nice paved roads is not really the problem. If the army needed to, they have the money to develop a car computer specialized for whatever terrain they operate in.

Their problem is ease of repair.

Most regular people can take their cars to a shop that will have the real basics in stock (i.e. oils and filters) and can order most of the rest from a distributor for same- or next-day delivery. You will only wait longer for specialized parts or parts for unusual cars. Also, most people can work around not having a car for a few days, even if it is a hassle.

While an army base does have specialized mechanics to properly fix the trucks, if one breaks down in the middle of a mission, after an engagement, the team in that truck needs to be able to patch it up and get it going in the field.

So the army needs trucks that people who are primarily soldiers, not mechanics, can patch up "easily" in the field: without tons of specialized training, without access to special or weird tools, and with minimal to no access to parts.


I think the idea is that k8s does away with having to glue all those pieces of infra together, not that you lose understanding of how it all works. Part of the headache with managing infra is that it rots over time... things come and go (sysv to systemd, apt/snap/whatever, config files change, things break). It's easier to keep up to date on k8s than on all the disparate parts of the OS and provider-specific APIs and whatnot.


That's an interesting perspective I have not heard before.

Does this imply there is a cloud abstraction layer that should come (assuming all providers can put aside commercial interests etc)?

And is k8s the simplest possible abstraction? And if not - what is?


> Does this imply there is a cloud abstraction layer that should come

crossplane.io comes closest afaik

> And is k8s the simplest possible abstraction? And if not - what is?

If you are asking about the simplest possible abstraction for container scheduling and orchestration, then I believe Nomad from HashiCorp or Docker Swarm are simpler. As for managed solutions with wide adoption in all types of environments and the largest investment to date, I am not aware of anything on par with K8S.


In the happy path, your developers no longer need to worry about this stuff. It’s possible for a team to stand up a new service and plumb it through all the way to the external LB just using k8s yaml templates.

In the unhappy path, sure, you need someone who knows how to debug networking issues, and in some cases it’s going to be harder to debug because of the layers of indirection. But the total amount of toil is significantly reduced.

A bad abstraction doesn’t carry its weight in complexity. A good abstraction allows you to ignore the lower levels most of the time without missing something important; I’d put k8s firmly in the latter category.
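That happy path can be sketched with something like the following (hostnames and service names are hypothetical): a team exposes a new service all the way out to the shared external load balancer with a few lines of YAML, never touching the LB itself:

```yaml
# An Ingress routes external traffic to the team's Service;
# the cluster's ingress controller reconfigures the shared LB.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: new-service
spec:
  rules:
    - host: new-service.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: new-service
                port:
                  number: 80
```

The team never files a ticket to have a load balancer rule added; the controller reconciles it.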


I agree and in my experience with k8s, the unhappy path is a rare experience. Everything just works most of the time


I have switched from dev to cloud engineer, and in my experience all abstractions leak and you need to master all layers.


Let's turn the knobs on the scenario and see how it appears:

>>> >>> with the advent of the first programming languages you no longer had to think in terms of registers, loading operands from memory, storing the results back, spilling registers to the stack.

>>> You _are_ handling registers and memory spills, but just one step removed through the use of C.

The analogy may not be perfect, but I think it makes obvious some of the things also mentioned in sibling comments: it's all about habit, maturity and thus trust.

If you trust your tools are working correctly, if you know how to deal with their well-known quirks, you'll just rebase on top of a layer and hopefully boost your productivity and tackle more complex problems more easily.

Maturity is important because if today you're more likely to blame yourself than the compiler when your stuff doesn't work, it's just because you're lucky to work with mature and popular toolchains. (And I'm not only talking about the past when compilers were new and unproven; it still happens today with some niche embedded toolchains.)

So, yes, it's indeed rational to be wary of new unproven abstraction layers as they could bring more pain than help.

It's hard to judge when that line is crossed though.

I personally like to know how stuff works under the hood anyways. I find it useful in practice and it gives me confidence in using the higher layers, when they make sense, or stay with the lower level layers, when they make sense.

Occasionally I still write some assembly. But most of the time, for most of the stuff, it just makes more sense to use a higher level programming language.

I see k8s in a similar way. We have operating systems, programming languages, etc.; all sorts of abstractions that help us separate concerns and have specialists dealing with the nitty-gritty details of some stuff, so that everybody else can be specialized in something else (just like in real life).


But I think this is apples and oranges - C maps pointers and memory allocation in a very direct, robust manner; even Python allows you to dis back to see the stack.

But this hard line to underlying reality is unlikely to exist when you look at how this month's k8s configures last year's AWS Route 53.

"It just works 80% of the time" is a disaster; 98% of the time might be bearable. Is it above 99%?


Indeed, my point was that it's a quantitative question not a qualitative one.

When compilers created buggy code every other day, when memory allocators were unreliable because memory fragmentation made it likely for new allocations to fail in the lifetime of a normal program execution, etc., it would be, as you said, "a disaster if it works <95%" of the time.

Did k8s reach 99%? The jury is still out. Probably not yet, but in principle I don't see anything wrong with pursuing that path just because it's "another layer". We use abstraction layers all the time; they enabled progress. (Yes, often we went too far.)


Isn’t this true of all operating systems? Debugging is going to require some knowledge of the OS and the syscalls it makes, but you don’t want to write directly to the machine as an OS would. The same goes for internet-connected clustered machines.



