How are you managing Supabase credentials across environments without things drifting?
One recurring issue we’ve been seeing with Supabase setups is not the database itself, but how credentials are managed across environments. The common pattern looks something like:
credentials stored in .env files or secrets managers
multiple environments (dev, staging, prod)
manual propagation or duplication across those environments
It works, but over time it seems easy for things to drift:
a key gets rotated in one environment but not others
a redeploy misses an env var
credentials get misconfigured during setup or migration
We’ve seen this cause failures that have nothing to do with application logic, just the surrounding setup. A few approaches we’ve come across:
.env files per environment + validation checks before deploy
centralized secrets (GCP, AWS, etc.) reused across services
scripts/tests to ensure required env vars are present
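The last approach can be sketched as a small pre-deploy check. The variable names are the usual Supabase ones mentioned elsewhere in this thread; everything else here is illustrative:

```typescript
// Pre-deploy guard: collect required Supabase variables that are
// missing or empty, so the deploy can abort before anything ships.
const REQUIRED_VARS = [
  "SUPABASE_URL",
  "SUPABASE_ANON_KEY",
  "SUPABASE_SERVICE_ROLE_KEY",
] as const;

export function missingVars(
  env: Record<string, string | undefined>
): string[] {
  // Treat undefined and empty/whitespace-only values as missing.
  return REQUIRED_VARS.filter((name) => !env[name]?.trim());
}

// Typical pre-deploy usage:
//   const missing = missingVars(process.env);
//   if (missing.length) { console.error(missing); process.exit(1); }
```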
Curious how others here are handling this in practice.
Are you fully relying on your cloud provider’s secrets layer?
How are you handling rotation across multiple environments?
Have you found a setup that actually eliminates drift, or is it mostly managed with guardrails?
Would be especially interesting to hear if the move toward JWT signing keys is changing how people think about this layer.



Replies
I’ve honestly moved away from relying on Supabase for anything critical, largely because of how fragile credential handling can become across environments (and the risk of unexpected deactivation on top of that).
What’s worked much better for me is treating credentials as infrastructure, not app config.
A few things that have helped eliminate drift almost completely:
Single source of truth (cloud secrets manager)
All environments pull from one place with environment-scoped paths—not duplicated .env files. No manual syncing.
Immutable deployments
Builds don’t “inject” secrets manually. The runtime fetches them dynamically or via the platform (e.g. container-level injection). That way, a missed env var during deploy just can’t happen.
Strict startup validation
The app fails fast if anything is missing or malformed. No partial boots.
Automated rotation strategy
Instead of rotating per environment, I rotate centrally and let services pick up changes via versioned secrets. This avoids the “rotated in prod but forgot staging” problem entirely.
Environment parity by design
Dev/staging/prod all use the same structure—only the values differ. If your structure differs, drift is guaranteed.
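A minimal sketch of how the central-source and versioned-rotation points combine. The `SecretsClient` interface and the `myapp/` path prefix are stand-ins, not a real SDK; GCP and AWS secret managers expose similar "access latest version" calls:

```typescript
// Environment-scoped paths: every environment asks for the same secret
// names and only the env segment differs, so structure can't drift.
type Env = "dev" | "staging" | "prod";

export function secretPath(env: Env, name: string): string {
  return `myapp/${env}/${name}`; // "myapp" is a placeholder prefix
}

// Stand-in for a real secrets-manager SDK client.
export interface SecretsClient {
  getSecret(path: string, version: "latest" | number): Promise<string>;
}

// Services resolve "latest" at startup (or on a refresh interval), so a
// single central rotation propagates everywhere; there is no
// per-environment rotation step to forget.
export async function loadSecret(
  client: SecretsClient,
  env: Env,
  name: string
): Promise<string> {
  return client.getSecret(secretPath(env, name), "latest");
}
```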
@stephennjiu The immutable deployments + runtime fetch pattern is super clean conceptually.
The only tradeoff I’ve seen with setups like this is the amount of infra you end up maintaining around it. Has that stayed manageable for you, or does it start to become a system of its own over time?
In my setup, I avoid duplicating credentials across environments as much as possible. Instead, I generate environment-specific keys but manage them through one pipeline. Rotation is automated and deployments always re-fetch values. It doesn’t fully eliminate drift but it makes it very predictable and easier to debug.
@lanfranco_iwanaga This makes a lot of sense, especially the predictable drift part as that’s usually the best you can aim for in practice.
Curious, with the pipeline handling generation + rotation, do you ever run into friction when debugging across environments, or has that stayed fairly clean over time?
Personally, I love relying on GCP Secret Manager: it takes care of rotation whenever I choose to rotate, and the backend gets updated in real time. Local development runs on a local Supabase instance, and the setup script also takes care of generating and storing credentials in local .env files, kept out of the repository (even if these are rather harmless credentials). In production, the secrets are just set as env variables on the VMs.
The real problem is that it's a bit harder to feed the same credentials to frontend codebases, and I've yet to find the best compromise. One way that seems to be working for now is to "mount" a static JSON file that comes directly from the secrets, which the frontend reads at runtime (these are not real "secrets" of course, but Supabase needs its anon key, and I just want to handle all credentials coherently and be able to manage rotation outside of the codebase / deploy process as much as possible).
It's kind of working for now :) At least this is not forcing me to rebuild all frontend at every rotation, which is kind of nice
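The mounted-file idea above could be sketched roughly like this; the `/config.json` name and the config shape are assumptions for illustration, not anything Supabase prescribes:

```typescript
// The frontend loads its Supabase config from a static file served
// next to the app, instead of baking keys into the build. Rotating
// keys means replacing the file, with no frontend rebuild or redeploy.
interface RuntimeConfig {
  supabaseUrl: string;
  supabaseAnonKey: string; // public by design, but still rotated centrally
}

export async function loadRuntimeConfig(
  fetchFn: typeof fetch = fetch
): Promise<RuntimeConfig> {
  // "no-store" keeps the browser from caching a pre-rotation copy.
  const res = await fetchFn("/config.json", { cache: "no-store" });
  if (!res.ok) throw new Error(`config fetch failed: ${res.status}`);
  return (await res.json()) as RuntimeConfig;
}
```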
@matteo_avalle This is a really interesting setup, especially the JSON mounting approach to avoid frontend rebuilds. The frontend side is exactly where I’ve seen the most friction too, especially keeping things consistent without leaking too much or forcing redeploys.
Have you found that approach holding up well as things scale, or does it start to get tricky to manage over time?
@eric_nodeops I genuinely feel like it has the potential to get there, but it is indeed tricky. Even here we haven't really addressed ALL the issues key rotation brings, mostly due to our setup: we leverage frontend-Supabase interactions for authentication, but everything then happens through a backend layer in the middle, which pushes the problem to the backend.
In general, I'd say that making sure the frontend "keeps up" with the hardening processes on the backend side is a very complex process regardless of your size, mostly because it's kind of a new thing: frontend keys are literally "publicly available, compromised by default" keys, so nobody ever felt the need to change them often before Supabase became popular. It's both a painful devops problem and an even more painful frontend problem: suppose you find a way to change your keys easily, but can you also force people to refresh their pages in real time, any time you apply a change? And what happens to your JWT tokens the moment keys change? Will you still recognize the old ones, or are you kicking people out at every rotation?
Being a "modern" problem, the support for it is kind of limited, so in my company we tried to avoid reinventing the wheel once again and tied this issue to our more common "multi-tenant" scaling problem: we have loads of subdomains, one per specific customer, and we'd really love to avoid keeping a lot of different frontend builds just for that, considering the codebase is always the same and the UX changes are just driven through a configuration env file.

So we replaced the .env with the JSON as our own way to deal with that: we have a single frontend container image, and the only difference between the various instances is indeed the mounted JSON file. This made our multi-tenant policies easier to scale, while also making key rotation quite doable. But still, caching and JWT tokens are the outstanding problems you should solve if you are using Supabase more directly: you may be forced to re-download that very same JSON configuration file at every API call or so, to be sure you always have the latest keys.
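That last point, re-downloading the configuration when keys stop working, could look roughly like this; both callbacks are placeholders for illustration, not Supabase APIs:

```typescript
// On a 401 after a rotation, refresh the mounted config once and retry
// with the new anon key, instead of refetching config on every call.
export async function fetchWithKeyRefresh(
  doFetch: (anonKey: string) => Promise<{ status: number }>,
  loadConfig: () => Promise<{ supabaseAnonKey: string }>
): Promise<{ status: number }> {
  let res = await doFetch((await loadConfig()).supabaseAnonKey);
  if (res.status === 401) {
    // The cached key may have been rotated out; pull the fresh config
    // file and retry exactly once with the new anon key.
    res = await doFetch((await loadConfig()).supabaseAnonKey);
  }
  return res;
}
```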
@matteo_avalle This is a great breakdown. The JWT + cache layer you mentioned feels like the real unsolved piece, even if you solve distribution, invalidation and session continuity are still messy. The JSON approach for multi-tenant setups is clever though, especially keeping a single frontend build and pushing config at runtime.
Curious, have you explored handling this more at the platform or deployment layer instead of the app layer, or does that just shift complexity elsewhere in your experience?
@jacksonliu_ Using Vercel as the source of truth for envs is interesting, especially with vercel env pull for local parity. Agree on migrations too, a lot of drift issues are actually schema-related rather than just credentials.
Have you ever run into friction when stepping outside the Vercel ecosystem, or has that setup been flexible enough for your use cases so far?
@eric_nodeops Yeah that’s a fair question — and honestly, this is where the “Vercel as source of truth” approach starts to show its limits a bit.
For anything fully inside Vercel, it’s been very smooth. Local parity via vercel env pull removes a ton of cognitive overhead, and for projects like ProEdit (Next.js + Supabase), I haven’t really felt friction.
Where it does get tricky is when you introduce:
non-Vercel services (scripts, workers, CI outside Vercel)
local tools that don’t naturally integrate with Vercel CLI
or multi-cloud setups
In those cases, Vercel stops being a universal source of truth and becomes more like a “primary registry,” and you still need some glue.
What’s worked reasonably well for me is:
treating Vercel as the write source
exporting envs into other systems (CI, scripts) rather than redefining them
keeping a typed env schema (e.g. Zod validation) so missing/misaligned vars fail fast
and avoiding per-env divergence unless absolutely necessary
So I wouldn’t say it eliminates drift — but it reduces it to something manageable with guardrails instead of constant manual sync.
If I were scaling beyond this, I’d probably introduce a proper secrets layer (e.g. AWS/GCP) as the upstream source and let Vercel consume from that, rather than the other way around.
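The typed env schema idea can be sketched without the Zod dependency; the two-variable shape here is just an assumption:

```typescript
// Zod-style typed env parsing, hand-rolled: validate once at startup,
// fail fast, and hand the rest of the app a typed config object
// instead of ad-hoc process.env reads.
export interface AppEnv {
  SUPABASE_URL: string;
  SUPABASE_ANON_KEY: string;
}

export function parseEnv(raw: Record<string, string | undefined>): AppEnv {
  const url = raw.SUPABASE_URL?.trim();
  const anonKey = raw.SUPABASE_ANON_KEY?.trim();
  if (!url || !/^https:\/\//.test(url)) {
    throw new Error("SUPABASE_URL is missing or not an https URL");
  }
  if (!anonKey) {
    throw new Error("SUPABASE_ANON_KEY is missing or empty");
  }
  return { SUPABASE_URL: url, SUPABASE_ANON_KEY: anonKey };
}

// Typical startup usage: const env = parseEnv(process.env);
```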
@jacksonliu_ That makes a lot of sense, especially the idea of Vercel becoming more of a “primary registry” once you step outside its boundaries.
Feels like the pattern you’re describing is:
→ single write source
→ distribute outward
→ add validation to catch drift early
Which works well, but still requires some glue as the surface area grows. What’s been interesting on our side is seeing some teams try to push that whole layer a bit closer to the deployment itself, so the environment and service wiring are resolved at deploy time rather than managed across tools.
In theory it reduces the need to sync or export things between systems, but I’m curious how you think about that tradeoff compared to keeping a central source like Vercel or a cloud secrets manager.
We keep it simple with per-environment .env files and a validation script that runs pre-deploy to ensure all required Supabase vars (SUPABASE_URL, SUPABASE_ANON_KEY, SUPABASE_SERVICE_ROLE_KEY) are present and non-empty. For production, secrets live in the hosting provider's env config, never in checked-in files. Key rotation is manual but infrequent; the validation step catches misses before they hit prod. It's not drift-proof, but the guardrails have prevented every "missing env var" outage so far.
@satish_pophale This is refreshingly simple, and honestly probably what most setups end up looking like in practice. The validation step before deploy is a nice guardrail.
Have you found that this still holds up as projects or environments grow, or do you start to feel the limits of the manual rotation / per-env setup over time?