Your Deployment Strategy Shouldn't Live in Your Pipeline
Here's a situation most platform teams have been in.
A team wants to do a canary deployment. So they modify the pipeline. They add a step that deploys to 10% of traffic, waits, checks some metric, then either proceeds or rolls back. It works. Six months later, a different team wants the same thing — but their pipeline is structured differently, so they write their own version. Then someone wants blue/green. Another pipeline modification. Then progressive delivery with automated analysis. Another version, in another pipeline, maintained by another team.
You now have five different implementations of deployment strategies scattered across your CI/CD system. None of them are consistent. All of them are someone's responsibility to maintain. And when something goes wrong mid-deployment, the person on call has to figure out which version of the strategy they're dealing with before they can even start debugging.
This is what happens when deployment strategy is hardcoded into the pipeline instead of being a first-class concern of the platform.
The problem
A deployment pipeline has two distinct jobs that are often collapsed into one.
The first job is delivery: take the artifact, get it to the cluster. The second job is strategy: decide how it gets there — all at once, gradually, with traffic splitting, with automated rollback triggers.
When these two jobs live in the same pipeline, the strategy becomes tightly coupled to the delivery mechanism. Changing the strategy means changing the pipeline. Testing a new approach means modifying infrastructure that other things depend on. And because pipelines are usually owned by individual teams rather than the platform, you end up with strategy logic that's duplicated, inconsistent, and invisible to anyone outside that team.
The deeper problem is that strategy decisions are cross-cutting. They affect reliability, blast radius, rollback time, and developer confidence. They deserve to be made once, implemented well, and available to every team — not reinvented in every pipeline.
Why it hurts
Imagine a deployment goes wrong mid-canary. Traffic is split 20/80 between the new version and the old one. The new version has a bug. You need to roll back.
If the canary logic lives in the pipeline, rollback means re-running the pipeline in reverse — or manually adjusting traffic weights in whatever tool manages them, if you can even find where that is. The pipeline ran its steps and moved on. It's not watching. It's not reacting. It doesn't know the deployment is in a bad state.
Now multiply this by the number of services your platform supports. Each one with its own pipeline, its own strategy implementation, its own rollback procedure. Your platform's deployment reliability is only as good as the worst pipeline in the system.
And when a postmortem asks "why did we deploy 100% of traffic at once to a service that had no rollback plan" — the answer is usually "because the pipeline didn't have that logic and nobody thought to add it."
What the pattern says
The Strategy pattern says: define a family of algorithms, encapsulate each one, and make them interchangeable. The caller selects which strategy to use. The strategy handles the implementation. Neither needs to know the details of the other.
In software, this pattern appears in sorting algorithms, payment processors, compression formats — anywhere you have multiple ways to accomplish the same goal and want to switch between them without changing the code that uses them.
In platform engineering, deployment strategies are exactly this. Rolling update, blue/green, canary, progressive delivery with automated analysis — these are a family of algorithms for getting a new version of software in front of users. They have different risk profiles, different rollback characteristics, different observability requirements. But from the application's perspective, the goal is the same: deploy this version.
The Strategy pattern says that choice should be declarative and interchangeable — not hardcoded into a pipeline.
Where you see this in practice
Argo Rollouts is the clearest implementation of this pattern in the Kubernetes ecosystem. You define a Rollout resource instead of a standard Deployment, and you declare your strategy in the spec:
strategy:
  canary:
    steps:
      - setWeight: 20
      - pause: {duration: 10m}
      - setWeight: 50
      - pause: {duration: 10m}
      - setWeight: 100
The application doesn't know what strategy is being used. The pipeline doesn't implement the strategy — it just triggers the rollout. Argo Rollouts is the executor that watches the rollout progress, manages traffic weights, and reacts to what it sees.
Switching from canary to blue/green is a change to the Rollout spec. Not a pipeline rewrite. Not a new set of scripts. A declarative change to the strategy definition.
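To make that concrete, here is a rough sketch of what the same Rollout might declare for blue/green instead. The Service names (my-app-active, my-app-preview) are placeholders, and the exact fields are worth checking against the Argo Rollouts documentation for your version:

strategy:
  blueGreen:
    activeService: my-app-active      # Service currently receiving production traffic
    previewService: my-app-preview    # Service pointed at the new ReplicaSet for pre-promotion checks
    autoPromotionEnabled: false       # hold for a manual promotion before cutting traffic over

Everything else, the pod template and the pipeline step that triggers the rollout, stays the same. Only the strategy block changes.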
Flagger takes the same idea and adds automated analysis. It watches metrics from Prometheus, Datadog, or other sources during a canary deployment and makes promotion decisions automatically. The strategy isn't just declared — it's reactive. If error rates spike, the rollout stops. If latency degrades, it rolls back. The strategy responds to real system state rather than waiting for a human to notice.
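A trimmed sketch of Flagger's Canary resource gives a feel for what "reactive" means here. It is cut down to the analysis section (a real resource also needs a service section and provider-specific settings), the target name and thresholds are illustrative, and the built-in metric names should be verified against the Flagger docs for your metrics provider:

apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: my-app
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app              # the workload Flagger manages during the rollout
  analysis:
    interval: 1m              # how often the metrics below are evaluated
    threshold: 5              # failed checks allowed before aborting and rolling back
    maxWeight: 50             # cap canary traffic at 50%
    stepWeight: 10            # increase canary traffic in 10% increments
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99             # halt if success rate drops below 99%
      - name: request-duration
        thresholdRange:
          max: 500            # halt if request duration exceeds 500ms

Nobody re-runs a pipeline to make these decisions. The controller promotes, pauses, or rolls back on its own based on what the metrics say.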
Feature flags apply the same pattern at the application layer. Tools like LaunchDarkly, Unleash, or Flagsmith decouple deploying code from activating features. The new version can receive 100% of traffic while only 10% of users see the new behavior. The strategy is controlled by the platform, not the pipeline.
The common thread: in all of these, the strategy is a separate, interchangeable concern. It's not woven into the delivery mechanism. It can be changed, tested, and evolved independently.
Hardcoding strategy is an antipattern
When deployment strategy lives in the pipeline, you've created a hidden coupling that's easy to miss until it causes a problem.
The pipeline owns delivery and strategy. Changing one risks breaking the other. Testing a new strategy means modifying production infrastructure. Rolling back a failed strategy change means a pipeline change, not a config change. And visibility into what strategy a given service uses requires reading pipeline code — not a platform API or a manifest in Git.
This is the antipattern. Not because pipelines are bad, but because strategy is a platform concern that's been pushed into a delivery concern. The blast radius of a mistake is larger than it needs to be. The iteration speed on strategy improvements is slower than it needs to be.
Separating strategy from delivery — making it declarative, observable, and interchangeable — is one of the higher-leverage improvements a platform team can make to their deployment reliability.
What changes when you think this way
When strategy is a first-class platform concern, a few things shift.
Reliability becomes consistent. Every service that uses the platform gets the same deployment strategy primitives — canary, blue/green, progressive delivery — without each team having to implement them. The platform team implements once. Every team benefits.
Iteration gets faster. When a team wants to try a more conservative canary rollout, they change a few lines in their Rollout spec. They don't open a ticket to the platform team. They don't modify a shared pipeline. They make a declarative change and the platform handles the rest.
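For example, a team that wants a slower, more cautious rollout might change the earlier canary steps to something like this. The weights and pause durations are purely illustrative:

strategy:
  canary:
    steps:
      - setWeight: 5
      - pause: {duration: 30m}
      - setWeight: 20
      - pause: {duration: 30m}
      - setWeight: 50
      - pause: {duration: 1h}
      - setWeight: 100

Same resource, same pipeline, same platform machinery. Only the declared strategy changed.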
Postmortems get cleaner. When a deployment goes wrong, the strategy execution is visible in the platform — Argo Rollouts has an API, a UI, a clear audit trail. The question "what happened during the rollout" has an answer that doesn't require reading pipeline logs.
The goal isn't to make deployment more complex. It's to make the complexity live in the right place — a dedicated, observable, interchangeable strategy layer — rather than scattered across dozens of pipelines that each solve the same problem in a slightly different way.
Does your platform have a consistent deployment strategy story, or is it still living in individual pipelines? Hit reply at blog@parraletz.dev.


