It’s Friday, and I’m writer’s-blocked trying to write up something persuasive defending the concept of Immutable Infrastructure in DevOps. Really, the client in question has at best Semi-Immutable Infrastructure: updates can be applied via the deployer for the systems that now work under that paradigm. They don’t tear down and rebuild their VMs for patches and security updates. And they still have a lot of stuff that runs under the traditional “mutable” model.
But we are pushing the idea of eventually having all IT infrastructure adopt DevOps principles. For major OS or software releases, or in the event of a security breach or other problems, the system should be torn down and rebuilt. I approach DevOps from a traditional IT background. Most of my colleagues approach it from developer backgrounds. I’ve found that traditional IT folks are more resistant to DevOps principles than people with developer backgrounds. So I feel like when IT people exhibit skepticism, having “come to Jesus” myself from the same heresy, I should be the one to preach the DevOps gospel to them.
I’m curious if anyone out there has worked in this area. My current focus is running a large, legacy database system as Immutable Infrastructure. Or at least Semi-Immutable Infrastructure, if you’re a real purist (which I am not).
I am in the process of converting our infrastructure to an immutable model from the semi-mutable model we have. We don’t mind mutability; we just want to be able to track how it changed and who changed it, and to make sure the change is consistent everywhere.
I have a developer background, and I am the one leading the charge on the automation/DevOps side against the traditional administrators.
I can offer advice from my experiences if you want to ping me privately; it’s too long to offer up in a comment. As a short aside: demonstrate the value of immutability and consistency to those on the fence via automation. Lead from the front and don’t talk theory. Do first. It’s much harder to argue against results and code you can point to than against ideas.
That seems to be a reasonable approach, at least if you don’t have any servers, and maybe even if you *do* provision your own servers (particularly if you have a lot of them). It’s my understanding that it’s cheaper to provision your own servers than it is to rent virtual servers from the cloud, with the added bonus and curse that you have complete control over them.
Of course, having your own servers is ideal when you have predictable traffic as well. If there’s variation, then having the ability to use virtual servers is very helpful; they can grow and shrink as you need them to. (And this can be done even if you also have your own servers.)
Come to think of it, from the developer side, the biggest pain that comes from updating servers isn’t that it introduces variability across the servers (which is a potential problem); it’s whether the updates are going to introduce bugs into your code base. As a developer, I’m not sure I see how immutability prevents that from happening, but I do see how, once you’re comfortable that your code base will work with the updates, it’s easier to roll them out…
Bah, I apologize for the rambling. I’m a developer, not a DevOps person, so my observations come from the outside, looking in!
I have another wrench to throw into the gears. I’m currently on a “quest” to help develop a package management system for a new computer language and blockchain platform; thus, we’re trying to figure out how to make a nice package management system for both language libraries and apps.
One package management system I’m looking closely at is Nix, the package manager behind a Linux distribution called NixOS. It’s an attempt to create an immutable package management system: it uses hashes to identify individual packages, so that different packages can depend on different versions of the same dependency.
It’s my understanding that under this system, if you upgrade to something new, the old stuff remains, unless you do some sort of garbage collection — and thus, the system is simultaneously upgradeable and immutable.
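Here’s a toy Python sketch of those two ideas together (hash-addressed packages, plus garbage collection of anything no longer referenced). The store layout and helper names are my own illustration, not Nix’s actual format:

```python
import hashlib

def store_path(name, version, dep_paths):
    """Derive a content-addressed identifier from a package's name,
    version, and the store paths of its dependencies (a Nix-like idea;
    real Nix hashes the full build recipe)."""
    h = hashlib.sha256()
    h.update(f"{name}-{version}".encode())
    for dep in sorted(dep_paths):
        h.update(dep.encode())
    return f"/store/{h.hexdigest()[:12]}-{name}-{version}"

# Two versions of the same library coexist at distinct paths.
ssl_old = store_path("openssl", "1.1", [])
ssl_new = store_path("openssl", "3.0", [])
app_v1 = store_path("myapp", "1.0", [ssl_old])
app_v2 = store_path("myapp", "2.0", [ssl_new])

# "Upgrading" just repoints a root; nothing is overwritten in place.
store = {ssl_old: [], ssl_new: [], app_v1: [ssl_old], app_v2: [ssl_new]}
roots = {app_v2}  # the current profile points at v2; v1 still exists

def garbage_collect(store, roots):
    """Mark-and-sweep: keep only paths reachable from the roots."""
    live, stack = set(), list(roots)
    while stack:
        path = stack.pop()
        if path not in live:
            live.add(path)
            stack.extend(store[path])
    return {p: deps for p, deps in store.items() if p in live}

print(garbage_collect(store, roots))  # app_v2 and openssl-3.0 survive
```

Until you run the collection, both generations are present, which is what makes rollback trivial.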
Unfortunately, the OS itself is a challenge to install, because the distro is in its infancy. Apparently, though, it’s even possible to use the Nix installer on already-established systems like Debian, even while the majority of the system is still managed by apt.
*shakes jowls while bah-humbugging* Since when is developer != traditional IT? I think you’re saying you approach it from a system administration background, not a developer background. To quote Vizzini, “it’s a prestigious line of work, with a long and glorious tradition.”
Well, I can’t help you, since I’m opposed to immutable infrastructure. haha
We do a lot of DevOps things here, but I find tearing down servers is overhead without much benefit. Sure, upgrading from Ubuntu 16.04 to 18.04 should involve a wipe and reinstall, but just for security patches? Just for a deployment? That seems ripe for problems and a waste of time.
Configuration management, like Ansible, was made to handle the in-between. That’s what we use here, and it works great.
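If you haven’t used one of these tools, the core trick is idempotent convergence: check the current state, and only act when it differs from the desired state. A toy Python sketch of that pattern (this is the general idea, not Ansible’s actual API; the file path and content are made up):

```python
import hashlib
from pathlib import Path

def ensure_file(path, content):
    """Converge a file toward a desired state, the way a config-management
    task does: compare current state to desired state and only act (and
    report 'changed') when they differ."""
    target = Path(path)
    desired = hashlib.sha256(content.encode()).hexdigest()
    if target.exists():
        current = hashlib.sha256(target.read_bytes()).hexdigest()
        if current == desired:
            return "ok"        # already in the desired state; do nothing
    target.write_text(content)
    return "changed"           # mutated, but toward a known, tracked state

# Running it twice is safe: the second run reports "ok" and touches nothing.
print(ensure_file("/tmp/demo.conf", "max_connections = 100\n"))
print(ensure_file("/tmp/demo.conf", "max_connections = 100\n"))
```

That’s why re-running a playbook against a hundred servers is cheap: it only mutates the ones that have drifted.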
I’m not a purist. Our approach is semi-immutable. Security patches and updates get applied in a controlled manner by the deployer. We only tear down and rebuild for major releases.
Suppose your server held a terabyte of database: would you wipe and reload that? Disposable server images suit servers doing toy-ish, disposable jobs. You need to be able to verify the server’s contents while it’s running. Was the old image wrong somehow? You don’t know, because you never examine or track it. Wipe-and-reload throws away the data you need to debug.
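To make that concrete, here’s a rough Python sketch of what I mean by verifying contents: record digests at deploy time and diff the live files against them later (the file name and values are made up):

```python
import hashlib
from pathlib import Path

def digest(path):
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def verify(manifest):
    """Compare live files against a manifest of digests recorded at deploy
    time; drift becomes a concrete, debuggable diff instead of something a
    wipe-and-reload silently discards."""
    drift = {}
    for path, expected in manifest.items():
        actual = digest(path) if Path(path).exists() else "missing"
        if actual != expected:
            drift[path] = (expected, actual)
    return drift

# Illustrative run: record a manifest, mutate a file, detect the drift.
conf = Path("/tmp/app.conf")
conf.write_text("max_connections = 100\n")
manifest = {str(conf): digest(conf)}        # captured at deploy time
conf.write_text("max_connections = 200\n")  # someone changed it in place
print(verify(manifest))                     # shows which file drifted
```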