Close call: the Bitwarden CLI incident and what we take from it
On 22 April 2026 a tampered version of the Bitwarden CLI package was served via npm for a good 90 minutes. Why we narrowly escaped — and which unspectacular things in our setup were responsible.
On the evening of 22 April 2026, a compromised npm package was published under the name @bitwarden/cli@2026.4.0. According to the official Bitwarden statement, the version was available on npm between 17:57 and 19:30 ET, before Bitwarden revoked access, withdrew the release, and pushed a clean version 2026.4.1. The compromise is part of the broader Checkmarx supply-chain incident, and a CVE for the affected version has been announced.
Bitwarden themselves state clearly that neither end-user vault data nor their own production systems were compromised. Only those who installed CLI version 2026.4.0 fresh from npm during the roughly 90-minute window are affected.
At Moselwal we narrowly escaped, and that has less to do with luck than with a few very unspectacular decisions in our build and deployment setup. This post puts the incident in context and describes which safeguards concretely held.
What exactly happened
Briefly and without drama: via the Checkmarx attack vector, attackers were able to manipulate a release of the official Bitwarden CLI package on npm and publish it under the regular version number 2026.4.0. The package stayed available for about an hour and 33 minutes. Bitwarden's recommendations for affected systems boil down to:
- uninstall @bitwarden/cli at version 2026.4.0 globally,
- clear the npm cache (npm cache clean --force),
- temporarily disable install scripts during cleanup (npm config set ignore-scripts true),
- rotate every secret that was present on a potentially affected system, in particular environment variables, API tokens, and SSH keys,
- review CI workflows and GitHub activity for anomalies,
- upgrade to @bitwarden/cli@2026.4.1.
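Condensed into shell, those steps look roughly like this; this is a sketch of Bitwarden's published recommendations, not an official remediation script:

```sh
# Remove the compromised global install and purge the npm cache.
npm uninstall -g @bitwarden/cli
npm cache clean --force

# Keep lifecycle scripts disabled while cleaning up.
npm config set ignore-scripts true

# Rotate every secret the machine could have exposed: environment
# variables, API tokens, SSH keys. (Manual step, no single command.)

# Once the system is clean, install the fixed release; re-enable
# install scripts afterwards only if your tooling needs them.
npm install -g @bitwarden/cli@2026.4.1
npm config set ignore-scripts false
```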
Soberly considered, this is a textbook supply-chain risk in a package registry: the legitimate code state in the repository was never compromised — the distribution path was.
Why we even use the CLI
At Moselwal we run our own Vaultwarden server — server-side API-compatible with Bitwarden — and use the Bitwarden CLI for automation scenarios. That we still keep classic passwords and API keys there isn't ideological backsliding, it's pragmatic reality.
For a whole range of customer systems and third-party services there's simply no way around the shared secret: older content management systems, hosting panels, FTP/SFTP access, legacy APIs, the odd SaaS dashboard, and database tools that support neither OPKSSH nor passkeys. Where possible, we move to OIDC, OPKSSH, or WebAuthn/passkeys. Where not, the password stays — and the best answer then is a centrally managed, auditable vault rather than a text file on a developer's laptop.
Concretely, we access Vaultwarden in two ways: locally on developer machines, when scripts or tooling need credentials that can't sensibly be handled via single sign-on, and in CI, to enable short-lived access to customer systems during deployments or maintenance tasks where modern auth isn't available. Those are exactly the two places where the manipulated @bitwarden/cli release could have genuinely hurt us.
Why Moselwal wasn't hit
The central point is simple: in the affected window, nobody on our side ran npm install -g @bitwarden/cli on a production or CI path. That's not a coincidence — it's a consequence of how our pipelines are built.
No "latest" in CI
Our CI doesn't install just any Bitwarden CLI. In the repositories that need it, the CLI is pinned as a dev dependency in package.json at an exact version. Installation runs exclusively via:
```sh
npm ci
```
npm ci installs strictly from package-lock.json and refuses to run if the lockfile and package.json diverge. New or changed versions don't drift into the build spontaneously; they enter only via an explicit lockfile commit. At the time of the incident, our lockfiles pinned an older, long-vetted version, so the compromised 2026.4.0 could only have been pulled in through an active update commit.
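For illustration, this is roughly how such a tool enters one of these repositories in the first place; the flags are standard npm, and the version is a placeholder rather than our actual pin:

```sh
# Add the CLI as an exactly pinned dev dependency (no ^ or ~ range);
# this updates package.json and package-lock.json in the same commit.
npm install --save-dev --save-exact @bitwarden/cli@<vetted-version>

# Everyone else, including CI, then installs strictly from that
# lockfile; npm ci fails hard if the two files disagree.
npm ci
```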
One defined entry point — locally and in CI
Locally the entry point runs through a Makefile. Unspectacular, but with a pleasant side effect: there is exactly one documented way to install and invoke tooling. A typical excerpt looks like this:
```make
.PHONY: tools bw-login bw-export

# Always call the locally pinned binary, never a global install.
NODE_BIN := ./node_modules/.bin

# Install exactly what the committed lockfile specifies.
tools:
	npm ci

bw-login: tools
	$(NODE_BIN)/bw config server vault.moselwal.internal
	$(NODE_BIN)/bw login --apikey

bw-export: bw-login
	$(NODE_BIN)/bw sync
	$(NODE_BIN)/bw get item "$(ITEM)"
```
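Invocation is then identical for everyone; the item name here is illustrative:

```sh
make bw-export ITEM="customer-sftp-prod"
```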
In CI we use the same idea, just packaged differently: instead of Makefiles, GitLab CI Components are used. The components encapsulate the same installation via npm ci and the same controlled invocation of the CLI, so developer side and pipeline side structurally do the same thing. The result is identical in both worlds: a defined, version-pinned invocation rather than a spontaneous installation.
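For a rough idea of the consuming side, a pipeline pulls in such a component like this; the component path, version, and input are illustrative, not our real setup:

```yaml
# .gitlab-ci.yml of a consuming repository. The component itself
# encapsulates npm ci and the pinned ./node_modules/.bin/bw calls.
include:
  - component: gitlab.moselwal.internal/ci-components/bw-tooling/export@1.2.0
    inputs:
      item: "customer-sftp-prod"
```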
Updates exclusively via Renovate
Dependency updates here don't happen ad hoc from the shell. We let Renovate drive them. Renovate opens pull requests against our repositories when a new version of a package appears and updates package.json and package-lock.json together. Those PRs merge automatically once the pipeline is green, but the decisive safeguard sits in front of that: a cool-off period. We hold new versions back for a while before Renovate even offers them as updates. The exception is a critical or high-severity CVE announced against the currently installed version; then we update fast.
The effect in this concrete case: even if Renovate could theoretically have seen version 2026.4.0, the cool-off period would have prevented it from flowing through as an update. By the time the minimum age would have expired, Bitwarden had long since deprecated the version and shipped 2026.4.1. No automation would have carried the compromised version unnoticed into production.
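In Renovate terms, the cool-off is the minimumReleaseAge option; here is a minimal sketch with illustrative values, not our production config:

```json
{
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchPackageNames": ["@bitwarden/cli"],
      "minimumReleaseAge": "3 days",
      "automerge": true
    }
  ],
  "vulnerabilityAlerts": {
    "enabled": true
  }
}
```

The vulnerabilityAlerts block corresponds to the exception above: Renovate raises security-driven updates without applying the usual waiting period.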
In summary
The protection layers that held are deliberately prosaic:
- reproducible installations via npm ci against a committed lockfile,
- a defined entry point: locally via Makefile, in CI via GitLab CI components,
- version changes through Renovate with a cool-off period, not from the shell,
- no global npm install -g calls on runners.

None of it is spectacular, and that's exactly the point.
What we're adjusting anyway
Even though we weren't affected, we're taking a few things from the incident. We'll harden our npmrc settings in build environments so that ignore-scripts=true becomes the default for tooling installations, unless we explicitly need install scripts. We'll review which repositories still have global installations (npm install -g) lingering in documentation or scripts, and move them to local dev dependencies. And we'll tighten the Renovate cool-off period for particularly sensitive packages: where a few hours used to be enough, the bar moves to "at least 48 hours old", to structurally catch cases like this one.
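The npmrc part is a one-line default; a sketch of a project-level file:

```ini
# .npmrc committed to tooling repositories: never run lifecycle
# scripts on install by default. Opt back in deliberately where a
# package genuinely needs them (npm config set ignore-scripts false).
ignore-scripts=true
```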
We're also continuously working to reduce the number of scenarios in which long-lived passwords are needed at all. OPKSSH and passkeys where they work, OIDC and short-lived tokens where the other side allows it. The vault stays — but it should be as empty as possible.
Conclusion
The incident on 22 April 2026 is a good, undramatic occasion to look at your own build and distribution paths. If you install packages globally and without lockfiles, run updates straight from the shell, or auto-merge brand-new releases unfiltered, you carried real risk that evening. If you have lockfiles, reproducible CI installations, and a cool-off period between registry and production in place, then a very unspectacular structure protected you.
For us, this boring stack — npm ci, Makefile and GitLab CI components, Renovate with cool-off — served its purpose. It's not particularly clever, but it's enough, and that's usually the best thing you can say about a setup in a security context.
Source of the incident: Bitwarden Statement on Checkmarx Supply Chain Incident, Bitwarden Community Forums, 23 April 2026.