Forked Helm Chart (a.k.a. Vendored Chart)

  • What it is: Copy the entire upstream chart into your repo; commit your own values.yaml (and sometimes template tweaks) in place. Upgrades are manual git merges from upstream. A typical layout is sketched below.
  • Why teams pick it: Absolute control. You can patch templates, lock dependencies, and vendor images. Regulated orgs like banks love this because every rendered manifest lives in their audit boundary.
  • Watch‑outs: Upstream moves fast; your fork ages. Without automation, merges become thankless archaeology sessions.
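
To make the trade‑off concrete: a vendored‑chart repo often looks like the sketch below, where ingress-nginx is an illustrative stand‑in for whatever you fork. Re‑vendoring is typically a helm pull --untar of the new upstream release followed by a hand merge.

```
charts/
  ingress-nginx/          # full upstream chart, vendored into the repo
    Chart.yaml
    values.yaml           # your committed overrides
    templates/            # occasionally patched in place
```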

Helm Umbrella / Wrapper Chart

  • What it is: Publish a thin “meta” chart that declares upstream charts as dependencies and ships a curated values.yaml. At deploy time, Helm pulls the child charts from their chart repositories (see the sketch after this list).
  • Why teams pick it: Centralizes opinionated defaults (network policy, storage class, TLS settings) while tracking upstream through semver: upgrading is just bumping the version pin. Keeps chart sources out of your repo, shrinking diff noise.
  • Watch‑outs: Debugging flows through two layers of values. Newcomers often ask, “Which chart owns this resource?”—documentation matters.
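
A minimal wrapper is sketched below; the chart names and versions are illustrative, and ingress-nginx stands in for any upstream dependency. Note that overrides for a child chart live under a key matching the dependency's name:

```yaml
# Chart.yaml: the thin wrapper chart
apiVersion: v2
name: platform-ingress
version: 1.4.0
dependencies:
  - name: ingress-nginx
    version: 4.10.1        # the upstream pin; bump this to track upstream
    repository: https://kubernetes.github.io/ingress-nginx
---
# values.yaml: curated defaults, keyed by the child chart's name
ingress-nginx:
  controller:
    ingressClassResource:
      default: true
```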

Kustomize Base + Overlay

  • What it is: Common manifests live in a base/ directory. Environment overlays (overlays/dev, overlays/prod, etc.) patch the base via strategic‑merge or JSON patches, as in the example below.
  • Why teams pick it: Zero templating. Kustomize stays in pure YAML, aligns with “plain‑K8s‑manifest” purists, and keeps diffs minimal across environments.
  • Watch‑outs: Large overlay hierarchies get deep—and brittle. Patching complex CRDs can be painful (looking at you, Istio).
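
For example, a prod overlay that only bumps replicas might look like this (names and paths are illustrative):

```yaml
# overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: replicas-patch.yaml   # a strategic-merge patch
---
# overlays/prod/replicas-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                     # must match the Deployment name in base/
spec:
  replicas: 5
```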

GitOps Monorepo

  • What it is: One repo, often named infra-live, contains every cluster, environment, and app manifest; Argo CD or Flux watches sub‑paths. A common layout follows this list.
  • Why teams pick it: Single source of truth. Atomic PRs span environments; new developers grep once to find everything.
  • Watch‑outs: Repo size explodes, CI gets chatty, and RBAC granularity is blunt (write access to prod YAML equals prod permissions). At ~50 engineers, most orgs feel the pain.
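
There is no canonical layout, but a common shape separates cluster‑scoped platform config from per‑app overlays; Argo CD or Flux then watches sub‑paths such as clusters/prod-us-east/:

```
infra-live/
  clusters/
    prod-us-east/       # platform config for one cluster
    staging/
  apps/
    checkout/
      base/
      overlays/
        staging/
        prod/
```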

Polyrepo Topologies

  • What it is: Split GitOps config across multiple repos: repo‑per‑team, repo‑per‑app, or repo‑per‑env. Security, networking, and each product team may own a repo of their own; prod sometimes earns a dedicated one. A project‑scoping sketch follows below.
  • Why teams pick it: Maps cleanly to Conway’s Law and access control—platform SREs can approve cluster‑wide changes without granting frontend devs prod keys.
  • Watch‑outs: Cross‑cutting upgrades (e.g., bumping the Kubernetes minor version) require one PR per repo. Consistency drifts unless you automate global migrations.
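
With Argo CD, the access‑control half of this story usually leans on AppProjects: each team's project whitelists only its own repo and namespaces. A minimal sketch, with the org, repo URL, and namespace pattern as illustrative placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: frontend
  namespace: argocd
spec:
  # the only repo this team's Applications may deploy from
  sourceRepos:
    - https://github.com/example-org/frontend-gitops.git
  # ...and the only places they may deploy to
  destinations:
    - server: https://kubernetes.default.svc
      namespace: "frontend-*"
```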

Argo CD App‑of‑Apps

  • What it is: A root Argo CD Application CR points at a folder of child Application CRs. Sync the root once and Argo instantiates the stack (a minimal root is sketched below).
  • Why teams pick it: Cluster bootstrapping. On day 0 you apply a single YAML file and watch the platform assemble itself (ingress, cert‑manager, observability stack…).
  • Watch‑outs: Two abstraction layers mean two places to misconfigure sync options. Disaster recovery docs must cover restoring both root and children.
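
A minimal root looks like the sketch below, assuming child Application CRs live under bootstrap/apps in an infra-live repo (both names are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: platform-root
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/infra-live.git
    targetRevision: main
    path: bootstrap/apps        # a folder of child Application CRs
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true               # delete children removed from git
      selfHeal: true
```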

Programmatic IaC (Pulumi / CDK / Terraform CDK)

  • What it is: Define infra in TypeScript, Go, Python, or C#. Repos resemble regular software projects: src/, unit tests, modules. Deploy via pulumi up or cdk deploy. A short sketch follows this list.
  • Why teams pick it: Loops, conditionals, and strongly‑typed components enable true code reuse (think: for env in envs). IDE autocompletion beats stringly‑typed YAML.
  • Watch‑outs: Engineers must grok both the cloud provider and the SDK. The stateful backend (a Pulumi stack or Terraform state file) needs rigorous locking in CI.
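
A minimal Pulumi TypeScript sketch of that for env in envs reuse; the bucket names and tags are illustrative:

```typescript
import * as aws from "@pulumi/aws";

const envs = ["dev", "staging", "prod"];

for (const env of envs) {
  // one logical resource definition, stamped out once per environment
  new aws.s3.Bucket(`artifacts-${env}`, {
    tags: { environment: env, team: "platform" },
    versioning: { enabled: env === "prod" }, // stricter settings in prod only
  });
}
```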

Micro‑Stacks / Project‑per‑Domain (Programmatic IaC Variant)

  • What it is: Break a large Pulumi or Terraform CDK project into multiple smaller projects, each with its own state file. Network, database, and app infra evolve independently (see the StackReference sketch below).
  • Why teams pick it: Safer blast radius. A database change can roll out without risking the VPC. Also shortens preview times in CI.
  • Watch‑outs: Inter‑stack dependencies become API contracts—publish outputs, import IDs, or wire them with Crossplane or Parameter Store.
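
In Pulumi, that contract is typically a StackReference: the consuming stack reads only the producer's published outputs. A sketch, with the stack path and output name as illustrative placeholders:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

// depend on the network stack's published outputs, never its internals
const network = new pulumi.StackReference("example-org/network/prod");

const dbSubnets = new aws.rds.SubnetGroup("db", {
  subnetIds: network.getOutput("privateSubnetIds"),
});
```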

Kubernetes Operator Packaging

  • What it is: Ship automation as a controller + CRD bundle (via Helm, OLM, or plain YAML). Ops teams deploy the operator once; developers create CRs to get complex services (a sample CR follows this list).
  • Why teams pick it: Encapsulates day‑2 logic. Backups, fail‑over, and upgrades live inside the operator code instead of fragile Bash.
  • Watch‑outs: You now depend on the operator’s maturity. Debugging a controller bug is harder than tailing a Pod log.
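
The developer‑facing surface ends up being a single CR. The PostgresCluster kind and fields below are hypothetical, standing in for whatever CRD your operator actually ships:

```yaml
apiVersion: db.example.com/v1
kind: PostgresCluster
metadata:
  name: orders-db
spec:
  replicas: 3
  backup:
    schedule: "0 2 * * *"   # day-2 behavior the controller reconciles for you
```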

Crossplane Composition (Platform API Pattern)

  • What it is: Platform engineers author high‑level CRDs (e.g., CompositeDatabase) and map them to cloud primitives via Crossplane Composition templates. Developers request infra by committing YAML, like the claim sketched below.
  • Why teams pick it: Self‑service without giving devs cloud console keys. Enforces guardrails and tagging in one place.
  • Watch‑outs: The platform team now owns a mini‑PaaS. Versioning compositions and rolling out breaking changes requires governance.
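
What a developer commits might look like the claim below. The group, kind, and spec fields follow the CompositeDatabase example from above, but the real schema is whatever your platform team defines in the XRD:

```yaml
apiVersion: platform.example.org/v1alpha1
kind: CompositeDatabase
metadata:
  name: orders-db
  namespace: team-checkout
spec:
  engine: postgres
  size: small               # the Composition maps this to an instance class
  compositionSelector:
    matchLabels:
      provider: aws         # pick the AWS flavor of the composition
```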