Description
Please answer some short questions, which should help us understand your problem / question better:
- **Which image of the operator are you using?** registry.opensource.zalan.do/acid/postgres-operator:v1.8.2
- **Where do you run it - cloud or metal?** Bare Metal K8s
- **Are you running Postgres Operator in production?** yes
- **Type of issue?** question
Hello,
I followed the setup according to https://github.com/zalando/postgres-operator/blob/master/docs/administrator.md#azure-setup and ended up with this pod environment ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: pod-env-overrides
  namespace: postgres-operator-system
data:
  # Any env variable used by spilo can be added
  USE_WALG_BACKUP: "true"
  USE_WALG_RESTORE: "true"
  CLONE_USE_WALG_RESTORE: "true"
  # Enables Azure backups (SCOPE = cluster name, PGVERSION = Postgres version)
  WALG_AZ_PREFIX: "azure://container-name/$(SCOPE)/$(PGVERSION)"
```
This all really works fine thus far. However :) when we were previously using S3, we got a nicer "path" for the storage location of the WAL segments, like so:
- template: `WALE_S3_PREFIX=$WAL_S3_BUCKET/spilo/{WAL_BUCKET_SCOPE_PREFIX}{SCOPE}{WAL_BUCKET_SCOPE_SUFFIX}/wal/{PGVERSION}`
- example: `s3://postgresql/spilo/mycluster-postgres/6e6599d0-81e1-4a63-8b28-f96f59160096/wal/13`
IMHO, this S3 layout is a much better storage location, as it makes it easy to identify which particular cluster the WAL segments came from. I think it would also allow having clusters with the same name in different k8s namespaces. Therefore, I set out to replicate this pattern using Azure Blob Storage, but no dice.
I found that `WALE_S3_PREFIX` actually seems to be created by Spilo at https://github.com/zalando/spilo/blob/a86778bd601c4f6de98db9d207a8c1e6af31c984/postgres-appliance/scripts/configure_spilo.py#L892:
```python
prefix_env_name = write_envdir_names[0]
store_type = prefix_env_name[5:].split('_')[0]
if not wale.get(prefix_env_name):  # WALE_*_PREFIX is not defined in the environment
    bucket_path = '/spilo/{WAL_BUCKET_SCOPE_PREFIX}{SCOPE}{WAL_BUCKET_SCOPE_SUFFIX}/wal/{PGVERSION}'.format(**wale)
    prefix_template = '{0}://{{WAL_{1}_BUCKET}}{2}'.format(store_type.lower(), store_type, bucket_path)
    wale[prefix_env_name] = prefix_template.format(**wale)
# Set WALG_*_PREFIX for future compatibility
if store_type in ('S3', 'GS') and not wale.get(write_envdir_names[1]):
    wale[write_envdir_names[1]] = wale[prefix_env_name]
```
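To make that excerpt easier to read out of its context, here is a small self-contained sketch of the same string handling for the S3 case; `prefix_env_name` is hard-coded purely for the demo, and `bucket_path` is deliberately left unformatted (in the real code it is already filled from the `wale` dict) so that the resulting template shape is visible:

```python
# Sketch of the quoted configure_spilo.py string handling, S3 case only (GS is analogous).
prefix_env_name = 'WALE_S3_PREFIX'              # write_envdir_names[0], hard-coded for the demo
store_type = prefix_env_name[5:].split('_')[0]  # -> 'S3'

# In the real code this path is .format(**wale)-ed first; kept unformatted here on purpose.
bucket_path = '/spilo/{WAL_BUCKET_SCOPE_PREFIX}{SCOPE}{WAL_BUCKET_SCOPE_SUFFIX}/wal/{PGVERSION}'
prefix_template = '{0}://{{WAL_{1}_BUCKET}}{2}'.format(store_type.lower(), store_type, bucket_path)

print(prefix_template)
# -> s3://{WAL_S3_BUCKET}/spilo/{WAL_BUCKET_SCOPE_PREFIX}{SCOPE}{WAL_BUCKET_SCOPE_SUFFIX}/wal/{PGVERSION}
# The second `if` in the excerpt then copies the finished value over to WALG_*_PREFIX,
# but only for store_type 'S3' or 'GS' - there is no such fallback for AZ in this excerpt.
```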
What I did not find is how / where `WALG_AZ_PREFIX: "azure://container-name/$(SCOPE)/$(PGVERSION)"` is actually interpolated. It does not appear to happen in Spilo, nor in WAL-G; that pretty much leaves the operator?
- q1: Can someone point me to where `WALG_AZ_PREFIX` is interpolated, so I can see whether there is a chance to include e.g. `$(WAL_BUCKET_SCOPE_SUFFIX)`?
- q2: Is it even a good idea to include the cluster uid (`WAL_BUCKET_SCOPE_SUFFIX`) in this path in the case of Azure? Both S3 and GS talk about buckets, but I think this is not a thing in Azure Blob.
Many thanks in advance!