The Platform Is the Architecture

There's a distinction that comes up a lot in platform engineering conversations, and it's worth making explicit.

There's a difference between a team that operates infrastructure and a team that designs a platform.

Both teams run Kubernetes. Both teams write Terraform. Both teams deal with incidents, manage clusters, and keep things running. From the outside, they look similar. From the inside, they make decisions in fundamentally different ways.

The operating team asks: is this working? The platform team asks: why is this designed this way, and what does that design decision cost us over time?

This series has been about developing the second kind of thinking — and about the vocabulary that makes it possible.


What we covered

Six posts. Six patterns. Each one a different lens on a problem that platform engineers face every day.

Factory and Builder — environment provisioning without a creation model is slow, inconsistent, and doesn't scale. When you centralize creation logic, you get consistency, self-service, and the ability to evolve the implementation without changing the interface.
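
A minimal sketch of that idea in Python. The `EnvironmentFactory` name and its tier presets are hypothetical, not from any specific tool — the point is that creation logic lives in one place:

```python
from dataclasses import dataclass

@dataclass
class Environment:
    name: str
    tier: str
    replicas: int
    monitoring: bool

class EnvironmentFactory:
    """Centralizes creation logic: every caller gets a consistent
    environment, and the presets can evolve without changing the interface."""

    TIER_PRESETS = {
        "dev":  {"replicas": 1, "monitoring": False},
        "prod": {"replicas": 3, "monitoring": True},
    }

    def create(self, name: str, tier: str) -> Environment:
        if tier not in self.TIER_PRESETS:
            raise ValueError(f"unknown tier: {tier}")
        return Environment(name=name, tier=tier, **self.TIER_PRESETS[tier])
```

Callers ask for "a prod environment" by name; replica counts and monitoring defaults never leak into their code.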

Facade — complexity leaks when there's no abstraction layer. The platform is a facade. The question isn't whether to build one — it's how opinionated to make it, and who it's designed for.
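
In code, the facade is just an opinionated entry point. This sketch uses plain dicts to stand in for real manifests, and the method name is invented for illustration:

```python
class PlatformFacade:
    """One opinionated entry point. Consumers say 'deploy a service';
    the facade decides which lower-level resources that implies."""

    def deploy_service(self, name: str, image: str) -> list[dict]:
        # Behind the facade: several resources the consumer never touches.
        return [
            {"kind": "Deployment", "name": name, "image": image, "replicas": 2},
            {"kind": "Service", "name": name, "port": 8080},
            {"kind": "Ingress", "name": name, "host": f"{name}.internal.example"},
        ]
```

How opinionated to make it is exactly the design question: every default baked in here is a decision taken away from the consumer.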

Observer — manual reconciliation doesn't scale. The Kubernetes controller loop is the Observer pattern at cluster scale. Toil is a symptom of missing observers. Name the toil, find the missing observer, build or adopt it.
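
Stripped to its core, one pass of a controller loop is a diff between declared and observed state. A toy version, with dicts standing in for cluster objects:

```python
def reconcile(desired: dict, actual: dict) -> dict:
    """One pass of a controller loop: diff desired state against
    observed state and emit the actions that close the gap."""
    actions = {}
    for name, spec in desired.items():
        if actual.get(name) != spec:
            actions[name] = "apply"    # missing or drifted
    for name in actual:
        if name not in desired:
            actions[name] = "delete"   # no longer declared
    return actions
```

Run this on a timer or on change events and you have the shape of every Kubernetes controller: observe, compare, act.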

Command — pipelines that apply changes directly are fast and invisible. GitOps decouples the command from its execution, and that decoupling gives you auditability, reversibility, and drift detection almost for free.
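
The decoupling can be sketched in a few lines. The `Change` object is the command — a declarative description recorded before anything executes (in real GitOps, the record is a commit). Class names here are illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Change:
    """The 'command': a description of intent, logged before execution."""
    resource: str
    desired: dict

class Reconciler:
    def __init__(self):
        self.log: list[Change] = []  # auditability: every change is recorded
        self.live: dict = {}         # what's actually running

    def apply(self, change: Change) -> None:
        self.log.append(change)
        self.live[change.resource] = dict(change.desired)

    def drift(self, declared: dict) -> list[str]:
        # drift detection: live state that no longer matches declarations
        return [r for r, spec in declared.items() if self.live.get(r) != spec]
```

Because every change is a recorded object rather than an imperative side effect, the audit log, the revert path, and the drift check all fall out of the same structure.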

Adapter and Glue — every platform connects systems that weren't designed to talk to each other. That connective tissue is glue. In most platforms, glue is 60-80% of what the team actually builds and maintains. The question isn't whether you'll write it — it's whether you'll design it intentionally.
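
Designed intentionally, glue looks like an adapter: a small translation layer with a clear owner and a clear interface. A sketch with invented names — the notifier stands in for any system whose interface you can't change:

```python
class SlackNotifier:
    """An existing interface we don't control and can't modify."""
    def send(self, channel: str, text: str) -> str:
        return f"[{channel}] {text}"

class AlertAdapter:
    """Glue as a designed component: translates monitoring-shaped events
    into the notifier's interface without modifying either side."""
    def __init__(self, notifier: SlackNotifier, channel: str):
        self.notifier = notifier
        self.channel = channel

    def fire(self, alert: dict) -> str:
        text = f"{alert['severity'].upper()}: {alert['summary']}"
        return self.notifier.send(self.channel, text)
```

The same translation written inline at every call site is the spaghetti version; isolated in one adapter, it's testable and replaceable.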

Strategy — deployment strategy hardcoded into pipelines is a hidden coupling that makes platforms less reliable and harder to evolve. When strategy is a first-class platform concern — declarative, observable, interchangeable — the whole delivery system gets more consistent.
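
Made first-class, a deployment strategy is just an interchangeable object behind a common interface. A toy sketch, with the plans reduced to strings:

```python
class RollingUpdate:
    def plan(self, service: str) -> list[str]:
        return [f"replace {service} pods one at a time"]

class Canary:
    def plan(self, service: str) -> list[str]:
        return [f"shift 10% of traffic to new {service}",
                f"watch metrics, then shift 100% to new {service}"]

def rollout(strategy, service: str) -> list[str]:
    """The delivery pipeline depends only on the interface; strategies
    swap without touching delivery code."""
    return strategy.plan(service)
```

This is the shape tools like Argo Rollouts give you declaratively: the pipeline stops encoding *how* to roll out and only says *that* a rollout happens.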


The thread that connects them

Looking back at these six patterns, there's a common thread.

Every one of them is about the same thing: separating concerns that are accidentally coupled, and making that separation explicit.

Factory separates what gets created from how it's created. Facade separates the consumer's interface from the system's complexity. Observer separates the thing that changes from the things that react to it. Command separates the definition of a change from its execution. Adapter separates incompatible interfaces without modifying either. Strategy separates the delivery mechanism from the deployment approach.

This isn't a coincidence. The Gang of Four didn't invent these patterns — they documented solutions that experienced engineers kept independently arriving at, because these couplings cause the same problems everywhere. In codebases. In distributed systems. In platforms.

The platform is a system. It has the same structural problems that software has. And it responds to the same structural solutions.


From glue spaghetti to platform control plane

Earlier in this series, we talked about glue — the code that connects systems that weren't designed to talk to each other. And we mentioned an evolution that a lot of platform teams go through:

scripts/
  deploy.sh
  create-service.sh
  add-monitor.sh
  update-ingress.sh

This is where most platforms start. It's not wrong — it's the natural first step when you're moving fast and the priority is getting things working.

But at some point, the scripts become a liability. They accumulate. They drift. They're owned by whoever wrote them last. They break in ways that are hard to debug because they have no reconciliation semantics, no observability, no clear interfaces.

The teams that scale past this point aren't necessarily the ones with the best tools. They're the ones that start treating the platform as a product with an architecture — not just a collection of scripts that happen to run in the right order.

That shift looks like:

  • Glue scripts become operators with proper reconciliation semantics
  • Ad-hoc pipelines become platform APIs with clear contracts
  • Scattered strategy implementations become a first-class delivery layer
  • Manual processes become observers that react automatically
  • Implicit conventions become explicit facades that abstract complexity

This is what a platform control plane looks like. Not a specific tool. A specific way of thinking about what the platform is and how it should behave.

And it maps almost exactly to the patterns in this series.


The vocabulary is the point

I want to be honest about what this series was and wasn't trying to do.

It wasn't trying to teach you Kubernetes. You already know Kubernetes. It wasn't trying to convince you to use Argo Rollouts or Crossplane or External Secrets Operator. Those are good tools, but the patterns exist independently of the tools.

What it was trying to do is give you vocabulary.

Vocabulary matters for a few reasons.

It makes architectural decisions explicit. When your team debates whether to expose Kubernetes directly to developers, "how much facade do we build" is a more productive framing than "should we use Backstage or not." The pattern gives the conversation a shape.

It helps you diagnose problems. When your pipelines are slow and tightly coupled, that's not just a pipeline problem — it's a symptom of missing separation of concerns. When you name the pattern that's missing or being violated, the solution space becomes clearer.

It connects you to a larger body of knowledge. The Gang of Four documented these patterns in 1994. Thirty years of software engineers have written about them, extended them, and argued about their limits. That literature is available to platform engineers too — it just needs translation.

And it levels the conversation. When a platform engineer and a software architect talk about the same problem using the same vocabulary, they can build on each other's thinking instead of talking past each other. The platform stops being "the infra people's problem" and starts being a shared architectural concern.


What comes next

This series ends here, but the topic doesn't.

We covered six patterns from the structural and behavioral categories — the ones most directly applicable to platform engineering. There's more to explore: architectural styles like event-driven systems and how they map to platform data planes, distributed systems patterns and how they show up in multi-cluster architectures, and the evolution from platform-as-infrastructure to platform-as-product.

If any of these posts sparked a question, a disagreement, or a "this is exactly what we're dealing with right now" — I'd genuinely like to hear about it.

Hit reply at blog@parraletz.dev. That's where the real conversation happens.