Destroy and Deploy: the Joys of Immutability

October 11, 2025 · Fernando Duran

Brought to you by SadServers  – Hands-On Linux & DevOps; Real Challenges. Real Infra. Real Skills.

Sysadmin Friend: “Hey, look, my application server in production has an uptime of over a thousand days! Talk about stability, eh?”

Me: “Well, that’s actually not a good thing.”

Sysadmin Friend: “What!?”

Me: “Sorry, let me tell you a grandpa story.”

In the old, pre-cloud days, we would make changes to our servers (upgrades, application deployments) half-manually, typically by running our own custom Bash scripts on big servers named after Star Wars characters. These were our “pets” or “snowflakes” because they were hard to reproduce from scratch, and we didn’t want to touch them too much, afraid they could die from a light breeze. If the sysadmin tending to the pet left the company and, as usual, there was no good documentation, then, well, there would be a world of pain.

Then “the cloud” was invented: on one hand you could spin up servers quickly and easily, and on the other hand they could die without notice (“what do you mean, ephemeral disks?”). Also, the collaboration between developers and operations (or rather, doing sysadmin the developer way) was coined “DevOps”, and that concept has been paying our salaries for a while.

A fundamental DevOps concept is “Infrastructure as Code”, where you declare in code how you want your infrastructure to be. Configuration management tools like Ansible, Chef, Puppet or SaltStack are now widely used to make changes in place to our servers. This is a good thing, but in general we can do better.

In-place upgrades can fail for any number of reasons (e.g. a flaky network, or Linux deciding to run an unattended upgrade that blocks package installations), leaving servers partially upgraded or in an unknown final state. This is a problem even if it only happens a small percentage of the time (and with a big fleet it will be more pronounced). We want atomic, “all or nothing” operations; they add predictability.

Instead of making changes in place, our infrastructure can be composed of immutable components where the whole component is replaced. For any change we can create new images of our servers (using Packer to produce AWS AMIs, for example), deploy them, and destroy the previous ones. Sometimes these re-created servers are called “Phoenix” servers.
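
As a rough illustration of the destroy-and-deploy flow, here is a minimal sketch using Python and boto3. The builder instance ID, image name and old instance IDs are hypothetical, and in practice a tool like Packer automates the image-baking part:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Bake an image from an already-configured "builder" instance
# (this is, roughly, the step that Packer automates for you).
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",          # hypothetical builder instance
    Name="myapp-2025-10-11",
    Description="myapp release baked on 2025-10-11",
)
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# Launch replacement servers from the new image...
ec2.run_instances(
    ImageId=image["ImageId"],
    InstanceType="t3.micro",
    MinCount=2,
    MaxCount=2,
)

# ...and, once they pass health checks, destroy the previous version.
ec2.terminate_instances(
    InstanceIds=["i-0aaaaaaaaaaaaaaaa", "i-0bbbbbbbbbbbbbbbb"]  # old servers
)
```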

Server immutability has several advantages:

  • Well-known state of servers at all times, with simple versioned history.
  • Fewer deployment failures.
  • Easy and fast deployments with lower downtime; simply spin up servers from the image. This also means faster horizontal scale-out.
  • Easier consistent deployments to different environments: the same image can be used for production and testing environments. This helps a lot with testing.
  • Possibility of different deployment strategies like blue-green or canary deployments: during an upgrade, incoming requests are gradually routed from the older image version to the new one, taking advantage of the fact that there are no servers in an intermediate state (see the sketch after this list).
  • Easy roll-backs to a previous version of an image.
  • No configuration drift problems in running servers.
  • Better security, since the configuration of the server is better understood and servers are more often destroyed and recreated.
  • Destroying and redeploying also uncovers issues that would make a reboot fail, like misconfigured initialization or configuration that lives only in memory and was never saved to disk. A frequent full destroy-and-deploy procedure in test environments will find these issues early.
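
For the blue-green/canary point above, here is a minimal sketch of shifting traffic between two fleets with Python and boto3, assuming an AWS Application Load Balancer with weighted target groups and hypothetical ARNs (one target group per image version):

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Hypothetical ARNs: one listener on the load balancer and one target group
# per image version ("blue" = current, "green" = new).
LISTENER_ARN = "arn:aws:elasticloadbalancing:...:listener/app/myapp/abc"
BLUE_TG = "arn:aws:elasticloadbalancing:...:targetgroup/myapp-blue/abc"
GREEN_TG = "arn:aws:elasticloadbalancing:...:targetgroup/myapp-green/def"

def shift_traffic(green_weight: int) -> None:
    """Send green_weight% of requests to the new fleet, the rest to the old one."""
    elbv2.modify_listener(
        ListenerArn=LISTENER_ARN,
        DefaultActions=[{
            "Type": "forward",
            "ForwardConfig": {
                "TargetGroups": [
                    {"TargetGroupArn": BLUE_TG, "Weight": 100 - green_weight},
                    {"TargetGroupArn": GREEN_TG, "Weight": green_weight},
                ]
            },
        }],
    )

shift_traffic(10)   # canary: 10% of traffic to the new servers
shift_traffic(100)  # full cutover once the canary looks healthy
```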

The main drawbacks are:

  • Need to recreate the image for any (application) change, including small ones. Note that configuration values and secrets can instead be pulled with cloud-init at spin-up time (see the sketch after this list).
  • The full cycle of change plus deployment can be slower than applying changes to the running server. Proper automation earlier in your CI/CD pipeline will help here.
  • Service discovery may be needed, since server addresses change with every deployment.
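
For the configuration-and-secrets note above, here is a minimal sketch of what a first-boot (cloud-init/user-data) step might run, assuming the values live in AWS SSM Parameter Store under a hypothetical /myapp/ prefix and the instance role is allowed to read them:

```python
import boto3

# Runs once at first boot on the freshly spun-up server, so the baked image
# itself never has to contain environment-specific configuration or secrets.
ssm = boto3.client("ssm", region_name="us-east-1")

db_url = ssm.get_parameter(Name="/myapp/db_url")["Parameter"]["Value"]
api_key = ssm.get_parameter(
    Name="/myapp/api_key", WithDecryption=True
)["Parameter"]["Value"]

# Render the values into the application's environment file (hypothetical path).
with open("/etc/myapp/env", "w") as f:
    f.write(f"DATABASE_URL={db_url}\nAPI_KEY={api_key}\n")
```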

All this killing servers and spinning up new ones is well and good, but what about permanent data?

Ideally (see “The Twelve-Factor App”) the servers run stateless applications; as for the data:

  • Application logs can be shipped to a central database for storage, aggregation and search, for example using the Fluentd/Elasticsearch/Kibana or Promtail/Loki stacks.
  • Permanent data needs to be external, for example a separate volume that gets attached to the server, or a managed database service from the cloud vendor (see the sketch after this list).
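
For the external-volume option above, a minimal sketch of re-attaching a long-lived data volume to the freshly deployed server, using Python and boto3 with hypothetical volume and instance IDs:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The data volume outlives any individual server; each new deployment simply
# re-attaches it (after the old instance is gone) and mounts it at boot.
ec2.attach_volume(
    VolumeId="vol-0123456789abcdef0",   # hypothetical persistent data volume
    InstanceId="i-0fedcba9876543210",   # hypothetical freshly launched server
    Device="/dev/sdf",
)
ec2.get_waiter("volume_in_use").wait(VolumeIds=["vol-0123456789abcdef0"])
```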

Sysadmin Friend: “Oh I see. I guess then you like Docker and Terraform?”

Me: “Yes, but that’s for another beer night.”