I failed to use iptables for years. I bought books. I copied recipes from blog posts. Nothing made sense, and everything I did was brittle. Until I finally found a schematic showing the flow of a packet through the kernel, which gives the exact order in which each rule chain is applied, and where some of the sysctl values are enforced. All of a sudden, I could write rules that did exactly what I wanted, or intelligently choose between rules that have equivalent behavior in isolation but could have different performance implications.
After studying the schematic, everything would just work on the first try. A good schematic makes a world of difference!
Shout out to the brilliant and generous work of the author!
It's been a few years for me tho, so perhaps it's covered in the VM section.
Lovely diagram, thanks for sharing it!
https://www.frozentux.net/iptables-tutorial/images/tables_tr...
I couldn’t find one that annotated where the sysctl configurables come into play. But it’s a useful annotation to have, even if it’s left as an exercise for the reader.
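For anyone who wants to do that exercise, here are a few of the netfilter-adjacent sysctls and roughly where they act along the packet path; the exact placement varies by kernel version, so treat this as a sketch:

    # checked at the routing decision; with 0, packets are never forwarded
    sysctl net.ipv4.ip_forward
    # reverse-path check, applied around the routing decision after PREROUTING
    sysctl net.ipv4.conf.all.rp_filter
    # caps the connection-tracking table consulted by the conntrack hooks
    sysctl net.netfilter.nf_conntrack_max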
https://wiki.nftables.org/wiki-nftables/index.php/Netfilter_...
And at http://www.easyfwgen.morizot.net/ there's an old but still useful generator for an iptables setup. That should help with understanding iptables.
If you know how iptables maps to that diagram, you are very likely to be able to quickly understand how nftables does too.
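To make that mapping concrete, here's the same rule in both syntaxes; the table and chain names on the nft side are my own choice, since nftables makes you declare them yourself:

    # iptables: the chain is fixed by the diagram's hook point
    iptables -A INPUT -p tcp --dport 22 -j ACCEPT

    # nftables: same hook, but you create and name the table/chain yourself
    nft add table inet filter
    nft 'add chain inet filter input { type filter hook input priority 0; policy accept; }'
    nft add rule inet filter input tcp dport 22 accept

Once you see that both are just attaching rules to the same hooks in the diagram, the translation becomes mostly mechanical.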
The same thing can sometimes fall into different categories as well - like going for a microservices architecture when you only need to serve about 10'000 clients in total, versus actually needing to serve a lot of concurrent requests at any given time.
I'd phrase it as reasonable trade-offs taken for customer/user support and/or selling products.
> going for a microservices architecture when you need to serve about 10'000 clients
So far the only use case for microservices I'm aware of is shipping/releasing fast at the cost of technical debt (a non-synchronized master). As I understand it, this is to a large degree due to git's shortcomings, with no more efficient/scalable replacement in sight. Or can you explain further use cases with a technical necessity?
You could try doing a first pass through an AI model to translate and then proofread it, for a quicker translation. Good luck, it would be fun and potentially impactful ;)
Doesn't even go into iptables/nftables
Soooo many systems are still using iptables even though we "should" be using nft everywhere.
If you're going to be a Linux Sys/Net Admin today, you need an understanding of both systems.
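One thing that helps when you have to straddle both is iptables-translate (shipped alongside the nft-backed iptables), which prints the nft equivalent of an iptables rule; the exact output below is approximate:

    $ iptables-translate -A INPUT -p tcp --dport 22 -j ACCEPT
    nft add rule ip filter INPUT tcp dport 22 counter accept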
And the answer is that it doesn't do NAT. The code is already preparing to do NAT, and merely consults the table to find out what kind of NAT it should do. The diagram makes it look like you could just move a NAT rule into a filter or mangle chain, since the kernel applies these tables in sequence anyway, but you can't: they are consulted by different blocks of code for different purposes.
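A quick way to convince yourself: the NAT target only exists for the nat table, so the "same" rule can't just be moved (the addresses and interface here are placeholders):

    # consulted by the kernel's NAT code from nat/POSTROUTING
    iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE

    # the same target in the filter table is rejected, because the filter
    # hooks never call into the NAT code at all
    iptables -t filter -A FORWARD -s 192.168.1.0/24 -o eth0 -j MASQUERADE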
Me too, then I discovered FreeBSD and pf tables. I _feel_ like an expert network engineer now. It took time and effort of course, but the learning process "clicked" for me all along the way and I was able to build on my understanding. Give it a try!
https://docs.freebsd.org/en/books/handbook/firewalls/
There was a recent book published on the tool: The Book of PF, 4th Edition.
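For a taste of the syntax, a minimal pf.conf sketch; the interface name and port are placeholders, not a recommendation:

    # /etc/pf.conf
    ext_if = "em0"
    set skip on lo
    block in on $ext_if
    pass out on $ext_if keep state
    pass in on $ext_if proto tcp to port 22 keep state

Load it with pfctl -f /etc/pf.conf, enable pf with pfctl -e, and pfctl -sr shows the rules that are actually loaded.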
2. Sometimes you have vastly different workloads, like a bunch of parts of the system (maybe even the majority) that are just CRUD, and a small part that needs to deal with message queues, or digitally sign documents, or generate reports, or do PDF/Word/Excel/whatever processing. You could do this with a modular monolith, but sometimes it's better to keep your project's dependencies clean, especially in the case of subpar libraries that you have to use, so you can at least put all of the bullshit in one place. This also applies in cases where the load/stability of that one eccentric part is in question.
3. The tech stack might also differ a whole bunch for some of those use cases. For example, if you need to process all sorts of binary data formats (e.g. satellite data), do specific kinds of number crunching, or interact with LLMs, a lot of the time Python will be a really good fit, even if that's not what you're using throughout the rest of your stack. So you might pick the right tool for the job and keep it as a separate service.
4. The good old org chart: sometimes you'll just have different teams with different approaches working on different parts of the business domain - you already said that, but Conway's law is very much a thing, and it'd be silly to fight against it all that much, because then you'd end up with an awkward monorepo, and the tooling that makes working in one easy might just not be available to you.
His book "Operativni sustavi i računalne mreže - Linux u primjeni" [0] (Operating systems and computer networks - Linux in use) may well make learning Croatian worth it! Congrats on publishing, and thanks for such an invaluable contribution!
To your C++03 analogy, I wouldn’t recommend learning C++03, but I also wouldn’t recommend solely learning C++23 either. C++20 and 23 have some really cool stuff in them that can definitely make your code cleaner, but there’s a lot of codebases that are stuck on older versions (at $JOB one of our target platforms is stuck on C++17 and will never get an upgrade so we can’t move the codebase forward until we abandon that kit).
I for instance have never really used iptables in anger, but have lots of experience with nftables and pf. I’ve used both in a professional setting. People can be made aware of iptables, but unless there’s a need to know it, I wouldn’t recommend picking it up now. And you’ll know if you need to learn C++17, iptables, or Python 2.7.
That makes no sense. Just because I do not know X, it does not necessarily follow that I am not required to know it, not at all. I might need it for my job, or my future job. I might need it for a Linux distribution I just installed, and so forth. Or perhaps I am already using iptables, but I do not know it.