Bill of Delivery — Or: Whose Problem Is It Anyway?
An SBOM tells you what's in a container. But which containers in which versions make up a release — that's a different question. On composition, the Open Component Model, NixOS+Podman, and why the Bill-of-Delivery debate is shaped largely by the K8s ecosystem.

A few days ago, an article by Matthias Bruns appeared on appetizers.io — Software Bills of Delivery: Beyond SBOMs with Component Models — positioning the Open Component Model (OCM) as the next evolutionary step after classic SBOMs. The post is worth reading because it addresses a real problem: an SBOM describes what is inside a single container, but it says nothing about which containers belong to a release as a set.
I arrive at a different conclusion than the author on the proposed solution — less because I think OCM is wrong, and more because I'm asking a different question: not “how do I solve the problem”, but “where do I have it in the first place?” My impression is that the entire Bill-of-Delivery discussion is heavily shaped by the K8s ecosystem and shifts considerably once you look one layer deeper.
I find that point interesting enough for a post because it illuminates something larger: how many of our “DevSecOps tooling necessities” are actually answers to problems we created for ourselves through our architectural choices?
High-level: what's this even about?
Imagine you're delivering software. That software consists of several parts — an application, a reverse proxy in front of it, perhaps a scheduler, a key-value store, a few configuration files. During an audit or a security incident, you want to be able to answer:
- What's inside each individual part? That's the classic SBOM — Software Bill of Materials. List of all libraries, licenses, versions.
- Who built it and how? That's provenance; think SLSA. Which commit, which runner, which inputs.
- Which parts in which versions belong to Release 2.4.0, atomically as a set? That's the composition — and this is exactly where the Bill-of-Delivery discussion comes in.
Point 3 is the interesting one. An SBOM describes what's inside a container. But it says nothing about which containers in which versions make up a particular release. That's exactly the gap concepts like the Open Component Model (OCM) try to close.
But: this gap doesn't exist everywhere equally. It's primarily a consequence of K8s' design decisions — not a bug, but a result of the requirements K8s was optimized for: dynamic systems, independently versioned deployments, decentralized composition as a deliberately chosen architecture.
Why K8s structurally has the composition gap
A typical K8s deploy is conceptually fragmented — and for good reasons. K8s is built for dynamic systems with independently versioned components; composition is deliberately distributed so that operator lifecycles, service updates, and cluster platform components don't have to move in lockstep.
In concrete terms:
- Container images live in one registry, with tags that are mutable and can drift over time.
- Helm charts live in another registry or chart repo.
- values.yaml and overrides live in a GitOps repo.
- ConfigMaps and secrets are injected through external mechanisms (External Secrets Operator, Vault Agent Injector, whatever).
- CRDs and operator versions have their own lifecycles.
- Service mesh configuration lives in yet other repos.
- Ingress routing comes from cluster platform charts.
There's simply no single place that authoritatively states: this is Release X, here are all its components with their digests, signed as one atomic unit. Helm tries, but only covers its subcharts. Flux/Argo work at the GitOps level, not the artifact level. Tags drift, ImagePullPolicy varies, operators are versioned outside app releases.
From this deliberate decentralization, a real need arises for a bundle layer that “binds it all together atomically” — when you want to roll out releases as closed units or promote them between environments. OCM positions itself exactly there. It's more than just a transport format — it defines a descriptor and a reference model for how components express their artifacts, provenance, and cross-component references. Conceptually well done. The need is legitimate, the solution holds up.
What NixOS + Podman does structurally differently
We know both worlds. With customers, we operate K8s clusters and work in exactly the tooling landscape I sketched above — composition fragmentation isn't a theoretical problem for us, it's daily business. For our own hosting infrastructure, run on German hosting providers, we've deliberately chosen multiple NixOS VPSes with Podman instead. This isn't a value judgment on the platforms, but a different architectural choice with different tradeoffs. What's interesting: what shows up as a separate tooling problem in the K8s context comes along as a side effect of the configuration model in the NixOS context.
A NixOS container definition looks roughly like this:
virtualisation.oci-containers.containers.app = {
  image = "registry.example.com/app@sha256:abc123...";
  ports = [ "127.0.0.1:8080:8080" ];
  environment = { ... };
  dependsOn = [ "kv-store" "scheduler" ];
};

virtualisation.oci-containers.containers.proxy = {
  image = "registry.example.com/proxy@sha256:def456...";
  # ...
};
Three properties stand out immediately:
Digest pinning is the default. You can use tags, but for production you write digests. Tag drift is ruled out by construction, not by discipline or a Renovate bot.
Composition lives in the same file as everything else. Configuration, firewall rules, systemd units, reverse proxy, backup timers, TLS certificates — all in one language, one evaluation step. No “coordinating between three tools”.
nixos-rebuild is atomic. Either the entire new generation activates, or the old one stays. Rollback is a boot menu entry, not a contraption with footnotes.
With one limitation that's fair to mention: Nix describes the desired state, not the actual runtime state. External dependencies — SaaS APIs, third-party-managed services, databases outside the configuration — fall outside the model. What Nix gives you is a very clean, deterministic description of the build and deploy layer. The runtime truth still has to be uncovered through observability — but that applies to any stack.
In our context, the tuple of release module (with the container set configuration) and flake.lock, versioned in Git and signed, is the more complete form of composition — because it's simultaneously the deployment truth, not just its description. What OCM additionally offers — cross-registry distribution, airgap transport, multi-cluster promotion — we don't need at this point. In a K8s context with distributed targets, the calculation would come out differently.
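To make that tuple concrete, here's a minimal sketch of how such a layout can look; the file names, host name, and nixpkgs branch are placeholders, not our actual setup. The release module is essentially the container set shown above; the flake pins it into a host configuration, and flake.lock pins every input the evaluation depends on:
# flake.nix (excerpt) -- hypothetical layout: one release module per release,
# imported into the host configuration. flake.lock pins nixpkgs and all other
# inputs, so a signed Git tag on this commit covers the complete composition.
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";

  outputs = { self, nixpkgs }: {
    nixosConfigurations.web-01 = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        ./hosts/web-01.nix            # firewall, systemd units, TLS, backups, ...
        ./releases/release-2.4.0.nix  # the container set with pinned digests from above
      ];
    };
  };
}
nixos-rebuild consumes exactly this evaluation, which is what makes the pair of release module and flake.lock the deployment truth rather than a description written next to it.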
What you still need — regardless of K8s or NixOS
This point is often over-generalized: “if I have Nix, I don't need DevSecOps anymore.” Wrong. Nix solves the composition layer. The other layers remain just as relevant.
SLSA provenance per image
For each container build, your pipeline produces an in-toto statement with predicate type slsa.dev/provenance/v1. It documents:
- which commit was built,
- on which runner and with which builder image,
- which inputs went into the build (resolvedDependencies).
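Sketched for orientation, such a statement looks roughly like this; it's written as a Nix attrset purely for readability, the pipeline emits the equivalent JSON, and every digest, URI, and id here is a placeholder:
# Rough shape of the attestation, as a Nix attrset for readability only;
# the CI pipeline emits the equivalent JSON. All digests, URIs, and ids are placeholders.
{
  _type = "https://in-toto.io/Statement/v1";
  subject = [
    { name = "registry.example.com/app"; digest.sha256 = "abc123..."; }
  ];
  predicateType = "https://slsa.dev/provenance/v1";
  predicate = {
    buildDefinition = {
      buildType = "https://example.com/ci/container-build";  # placeholder build type
      externalParameters.gitCommit = "3f9c0d...";            # which commit was built
      resolvedDependencies = [
        # the base image, pinned by digest rather than tag
        { name = "php-base"; uri = "registry.example.com/php-base"; digest.sha256 = "xyz..."; }
      ];
    };
    runDetails.builder.id = "https://gitlab.example.com/-/runners/42";  # which runner built it
  };
}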
Important: the base image belongs in those inputs — as a digest, not a tag. This creates a verification chain you can trace back during an audit:
App provenance ── baseImage: php-base@sha256:xyz
   │
   └── its own provenance ── baseImage: wolfi-base@sha256:...
          │
          └── ...
Attach to the image via cosign attest --type slsaprovenance. This is the lower layer of the supply chain, and it's orthogonal to your deployment model.
Image signing with Cosign
With self-hosted GitLab, keyless Sigstore is currently not an option (the Sigstore Public Good trust root expects an OIDC identity that works cleanly for gitlab.com but doesn't establish trust for self-managed instances out of the box). So we use key-based signing with short-lived build keys from Vault instead. Every image that goes into the production registry is signed.
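A minimal sketch of what that signing step can look like, assuming the Vault transit key is called build-signing and the CI job gets VAULT_ADDR plus a short-lived VAULT_TOKEN injected; it's wrapped as a Nix shell application only so every job uses the same cosign version:
# Hypothetical CI signing step; key name, registry, and digest are placeholders.
pkgs.writeShellApplication {
  name = "sign-image";
  runtimeInputs = [ pkgs.cosign ];
  text = ''
    # cosign talks to Vault's transit engine via VAULT_ADDR / VAULT_TOKEN
    cosign sign --yes \
      --key "hashivault://build-signing" \
      "registry.example.com/app@sha256:abc123..."
  '';
}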
OpenVEX for vulnerability hygiene
Wolfi images are extremely lean, but Trivy/Grype still regularly report CVEs in transitive libraries whose vulnerable code paths aren't even reached. Instead of documenting that in a Confluence doc or a ticket system, you write OpenVEX statements: signed, machine-readable, in the OCI registry as an attestation alongside the image. Per CVE, one clear statement — affected, not_affected with justification, fixed, under_investigation.
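A minimal sketch of such a statement, here written as a Nix expression that renders the OpenVEX JSON; this is purely illustrative of the format (field names follow OpenVEX v0.2.0), and the CVE, product identifier, and dates are placeholders, not a real finding:
# Hypothetical: a VEX statement kept next to the configuration and rendered to JSON.
# Field names follow OpenVEX v0.2.0; CVE, product id, and timestamps are placeholders.
pkgs.writeText "app-openvex.json" (builtins.toJSON {
  "@context" = "https://openvex.dev/ns/v0.2.0";
  "@id" = "https://vex.example.com/app/2025-0007";
  author = "Platform Security";
  timestamp = "2025-01-15T10:00:00Z";
  version = 1;
  statements = [
    {
      vulnerability.name = "CVE-2025-12345";
      products = [ { "@id" = "pkg:oci/app@sha256:abc123..."; } ];
      status = "not_affected";
      justification = "vulnerable_code_not_in_execute_path";
    }
  ];
})
The rendered JSON then goes into the registry as an attestation alongside the image, as described above.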
This massively reduces audit noise and is exactly the mechanism a pure SBOM check lacks. For ISO 27001 or TISAX it's invaluable, because an auditor no longer has to ask “why has this high-severity CVE been open for three weeks”, but instead reads “not affected, because X”, signed and dated.
System SBOM from the Nix closure
A Nix derivation is essentially a very precise component description — content-addressed, deterministic, with a complete closure graph. Tools like genealogos export this to CycloneDX. For our NixOS hosts, that means: a system SBOM falls out almost as a by-product — and the reproducibility is real, not claimed.
With one caveat: a Nix closure SBOM isn't one-to-one equivalent to a classically maintained SBOM. License metadata, maintainer fields, and a few other standard attributes don't come along automatically; depending on audit requirements, you have to supplement them from Nix package definitions or fill them in via additional tools. Doable, but not a free ride.
Verify-on-deploy
The most important step, and the one that most often gets forgotten: a signature nobody verifies might as well not exist. In the NixOS stack, the nixos-rebuild path lends itself well to this — a cosign verify in the build step ensures that a configuration referencing an image whose signature doesn't validate against the key fails the build. Production hosts only get configs that have already passed the verify step.
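A minimal sketch of such a gate, assuming a key-based setup as above; the key path is a placeholder, and in practice the image list would be derived from the evaluated host configuration rather than spelled out by hand:
# Hypothetical pre-deploy gate, run in CI before nixos-rebuild is allowed to switch.
# writeShellApplication sets -euo pipefail, so one failed verification fails the job.
pkgs.writeShellApplication {
  name = "verify-release-images";
  runtimeInputs = [ pkgs.cosign ];
  text = ''
    for image in \
      "registry.example.com/app@sha256:abc123..." \
      "registry.example.com/proxy@sha256:def456..."
    do
      # rejects the config if the signature doesn't validate against the key
      cosign verify --key /run/secrets/cosign.pub "$image"
    done
  '';
}
Run as the last pipeline step before nixos-rebuild switch, this implements the same “what isn't signed doesn't run” property on the NixOS side.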
In the K8s context, the equivalent place is Kyverno or the Sigstore Policy Controller — admission webhooks that reject unsigned images at scheduling time. Mechanically different, semantically identical: what isn't signed doesn't run.
The honest assessment: when does OCM pay off?
I don't consider OCM marketing fluff. It solves a real problem — just not mine. Concretely, it pays off when:
- you have to atomically transfer multi-artifact releases between registries (build registry → customer cluster registry, or region A → region B),
- you're operating in a K8s world where composition fragmentation is real,
- you have cross-cluster promotion workflows that get painful with plain helm upgrade.
It doesn't pay off when:
- your composition already lives in a deterministic configuration language (Nix, Bazel rules, Cue configs),
- you have exactly one registry from which you deploy,
- you've already cleanly solved the “composition as first-class artifact” property by other means.
Concretely, this means two different answers for us depending on the context: in customer K8s setups, OCM is a building block worth examining — especially when multi-artifact promotion between environments or cross-registry transfer becomes a topic. For our own NixOS infrastructure, we solve the composition property differently, without having to introduce an additional bundle format. Both answers are clean in their respective contexts.
What I take from the original article
Terminology hygiene matters. “Software Bill of Delivery” is not an established standard — neither NTIA, nor CISA, nor OWASP, nor CNCF use the term. That's not a reproach to an author who uses it as a descriptive umbrella term. But anyone who adopts it in formal external communication or with auditors makes interoperability unnecessarily hard. The established acronyms are SBOM, SLSA, in-toto, VEX, Sigstore. Auditors recognize them. That's why I use them in my own documentation as well.
Take the mental model anyway. Atomic composition with a provenance chain is the right mental lens — regardless of the tooling you implement it with. With NixOS, it's release modules plus flake.lock plus signed images with provenance. With K8s, it's OCM or a self-built manifest plus Sigstore. The properties are the same, the tools are different.
DevSecOps architecture is architecture. Many tooling necessities disappear when you choose the architecture one layer down correctly. That's not an argument against K8s — there are good reasons for cluster platforms, and scaling needs is one of them. But it is an argument against choosing K8s because “everyone uses it” and then having to engage with the associated tooling landscape, when the actual requirements don't call for it.
Take-away
The question isn't “do I need a Bill of Delivery?” The question is: “where does my stack already bring the composition property along, and where do I have to add it on?”
In K8s, add it on — through bundle formats, Sigstore policies, GitOps discipline. In NixOS hosts with Podman, it's there as a side effect — all that's left is making it visible and signed.
Provenance per image, OpenVEX for audit hygiene, and verify-on-deploy remain mandatory in both worlds. The rest depends on which composition problem your stack has handed you — or spared you from.
Both paths are legitimate. K8s with OCM or an equivalent bundle layer is a consistent answer to a consistent problem. NixOS with Podman is a different — not superior — answer that resolves the problem at a different point. What counts isn't the choice itself, but that the properties end up cleanly in place: atomic composition, traceable provenance, signature, verification. Which path leads there depends on the stack, the team, the customer, the scaling requirements.
That's exactly why I find engagements like the one with the Appetizers post valuable: not to be right, but to more clearly name the properties we're all trying to achieve — and to see how many different paths can lead there.