Keycloak in Standalone Clustered Mode (K8s): 'BCFIPS provider not found'

Hi everyone,

We are trying to get a Keycloak 14 server to run with the BCFIPS/BCJSSE security providers in clustered mode (standalone-ha.xml config) on a Kubernetes cluster. Some notes: we use the keycloak-containers repo, which provides the Docker image setup, and then the codecentric keycloak Helm chart (https://github.com/codecentric/helm-charts/tree/master/charts/keycloak) to deploy into K8s pods. We use a custom kc.java.security file, passed in via a JAVA_OPT, to set the order of precedence for BCFIPS and BCJSSE.
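For reference, the provider-ordering file looks roughly like this. This is an illustrative sketch of the usual BCFIPS/BCJSSE registration, not our exact file; paths and provider positions may differ in our setup:

```properties
# kc.java.security — provider precedence (illustrative sketch)
security.provider.1=org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider
security.provider.2=org.bouncycastle.jsse.provider.BouncyCastleJsseProvider fips:BCFIPS
security.provider.3=sun.security.provider.Sun
```

It is wired in via something like `-Djava.security.properties=/path/to/kc.java.security` in the JVM options (note that a single `=` overlays the JDK's built-in java.security, while `==` replaces it entirely).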

When autoscaling is disabled (a single pod), our BCFIPS/BCJSSE providers work fine and we can see their use in the logs. When we enable autoscaling (at the Helm chart level), two more replica pods are created, giving us 3 total. We add the KUBE_PING logic in our keycloak.yaml (pic below).
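Here is roughly what the KUBE_PING wiring looks like in the chart values. This is a sketch rather than our exact keycloak.yaml; the keycloak-containers image reads the `JGROUPS_DISCOVERY_*` environment variables at startup to switch the JGroups discovery protocol:

```yaml
# Sketch of KUBE_PING-related values (details differ in our real file)
extraEnv: |
  - name: JGROUPS_DISCOVERY_PROTOCOL
    value: kubernetes.KUBE_PING
  - name: JGROUPS_DISCOVERY_PROPERTIES
    value: namespace={{ .Release.Namespace }}
```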

We also add RBAC rules in values.yaml like this:
(screenshot of the values.yaml RBAC rules)
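For readers without the screenshot, the RBAC block is roughly the standard one that lets KUBE_PING list peer pods (a sketch of what the codecentric chart accepts, not a verbatim copy of ours):

```yaml
# Sketch — grants the Keycloak service account read access to pods
rbac:
  create: true
  rules:
    - apiGroups: [""]
      resources: ["pods"]
      verbs: ["get", "list"]
```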

Our issue shows up once these modifications are applied to the Helm chart and we deploy. We can access the admin console just fine, but when we test our SMTP connection from the admin console (which triggers an SSL/TLS handshake), we get a string of errors: 'java.security.NoSuchProviderException: no such provider: BCFIPS', 'java.security.KeyManagementException: Default key/trust managers unavailable', and 'java.security.NoSuchAlgorithmException: Unable to invoke creator for DEFAULT: Default key/trust managers unavailable'. To be clear, the SMTP connection test works fine when running in a single pod.
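To narrow down which pods actually have the provider registered, a small JDK-only diagnostic can be run in each pod (it uses no BouncyCastle classes, so it runs even where registration failed; the class name is ours, shown as an illustration):

```java
import java.security.Provider;
import java.security.Security;

public class ListProviders {
    public static void main(String[] args) {
        // Print every registered JCA provider in precedence order;
        // on a correctly configured pod, BCFIPS/BCJSSE should appear first.
        for (Provider p : Security.getProviders()) {
            System.out.println(p.getName());
        }
        // getProvider() returns null when the provider was never registered,
        // which is exactly the condition behind NoSuchProviderException.
        System.out.println("BCFIPS registered: "
                + (Security.getProvider("BCFIPS") != null));
    }
}
```

If the two new replicas print a different provider list than the original pod, that would point at the JVM options (and hence kc.java.security) not reaching those pods.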

Our point of confusion: we don't understand why we can invoke our BCFIPS provider in a single pod, but can't when there are multiple pods (autoscaling turned on). Any insight into registering custom security providers in clustered mode would be great. We have all our providers registered in standalone-ha.xml, but is there more configuration needed when autoscaling is turned on? It's odd that our app works fine in a single K8s pod but can't seem to see any of our config when running with two more replica pods (3 total).

Further, we notice that the logs differ between pods: the provider error only shows up in the two new replica pods, not in the original pod, even though the original gets redeployed along with the two new ones.

Any insight into running Keycloak in standalone clustered mode on Kubernetes with BouncyCastle FIPS and BouncyCastle JSSE would be great!
Thanks!