How to connect two Keycloak instances across separate servers

I am trying to connect two Keycloak instances across separate servers, but I cannot get them to form a cluster. Can somebody help me?

Here are the Docker commands for my containers and my Infinispan config.

INFINISPAN

<!--
  ~ Copyright 2019 Red Hat, Inc. and/or its affiliates
  ~ and other contributors as indicated by the @author tags.
  ~
  ~ Licensed under the Apache License, Version 2.0 (the "License");
  ~ you may not use this file except in compliance with the License.
  ~ You may obtain a copy of the License at
  ~
  ~ http://www.apache.org/licenses/LICENSE-2.0
  ~
  ~ Unless required by applicable law or agreed to in writing, software
  ~ distributed under the License is distributed on an "AS IS" BASIS,
  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  ~ See the License for the specific language governing permissions and
  ~ limitations under the License.
  -->

<infinispan
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="urn:infinispan:config:14.0 http://www.infinispan.org/schemas/infinispan-config-14.0.xsd"
    xmlns="urn:infinispan:config:14.0">
    <jgroups>
        <stack name="postgres-jdbc-ping-tcp" extends="tcp">
            <TCP external_addr="${env.JGROUPS_DISCOVERY_EXTERNAL_IP:127.0.0.1}" />
            <JDBC_PING
                connection_driver="org.postgresql.Driver"
                connection_username="${env.KC_DB_USERNAME}"
                connection_password="${env.KC_DB_PASSWORD}"
                connection_url="jdbc:postgresql://${env.KC_DB_URL_HOST}:${env.KC_DB_URL_PORT:5432}/${env.KC_DB_URL_DATABASE}${env.KC_DB_URL_PROPERTIES:}"
                initialize_sql="CREATE SCHEMA IF NOT EXISTS ${env.KC_DB_SCHEMA:public}; CREATE TABLE IF NOT EXISTS ${env.KC_DB_SCHEMA:public}.JGROUPSPING (own_addr varchar(200) NOT NULL, cluster_name varchar(200) NOT NULL, bind_addr varchar(200) NOT NULL, updated timestamp default current_timestamp, ping_data BYTEA, constraint PK_JGROUPSPING PRIMARY KEY (own_addr, cluster_name))"
                insert_single_sql="INSERT INTO ${env.KC_DB_SCHEMA:public}.JGROUPSPING (own_addr, cluster_name, bind_addr, updated, ping_data) values (?, ?, '${env.JGROUPS_DISCOVERY_EXTERNAL_IP:127.0.0.1}', NOW(), ?)"
                delete_single_sql="DELETE FROM ${env.KC_DB_SCHEMA:public}.JGROUPSPING WHERE own_addr=? AND cluster_name=?"
                select_all_pingdata_sql="SELECT ping_data, own_addr, cluster_name FROM ${env.KC_DB_SCHEMA:public}.JGROUPSPING WHERE cluster_name=?"
                clear_sql="DELETE FROM ${env.KC_DB_SCHEMA:public}.JGROUPSPING WHERE cluster_name=?"
                info_writer_sleep_time="500"
                remove_all_data_on_view_change="true"
                stack.combine="REPLACE"
                stack.position="MPING"
            />
        </stack>
    </jgroups>
    <cache-container name="keycloak">
        <transport lock-timeout="60000" stack="postgres-jdbc-ping-tcp" />
        <local-cache name="realms" simple-cache="true">
            <encoding>
                <key media-type="application/x-java-object" />
                <value media-type="application/x-java-object" />
            </encoding>
            <memory max-count="10000" />
        </local-cache>
        <local-cache name="users" simple-cache="true">
            <encoding>
                <key media-type="application/x-java-object" />
                <value media-type="application/x-java-object" />
            </encoding>
            <memory max-count="10000" />
        </local-cache>
        <distributed-cache name="sessions" owners="2">
            <expiration lifespan="-1" />
        </distributed-cache>
        <distributed-cache name="authenticationSessions" owners="2">
            <expiration lifespan="-1" />
        </distributed-cache>
        <distributed-cache name="offlineSessions" owners="2">
            <expiration lifespan="-1" />
        </distributed-cache>
        <distributed-cache name="clientSessions" owners="2">
            <expiration lifespan="-1" />
        </distributed-cache>
        <distributed-cache name="offlineClientSessions" owners="2">
            <expiration lifespan="-1" />
        </distributed-cache>
        <distributed-cache name="loginFailures" owners="2">
            <expiration lifespan="-1" />
        </distributed-cache>
        <local-cache name="authorization" simple-cache="true">
            <encoding>
                <key media-type="application/x-java-object" />
                <value media-type="application/x-java-object" />
            </encoding>
            <memory max-count="10000" />
        </local-cache>
        <replicated-cache name="work">
            <expiration lifespan="-1" />
        </replicated-cache>
        <local-cache name="keys" simple-cache="true">
            <encoding>
                <key media-type="application/x-java-object" />
                <value media-type="application/x-java-object" />
            </encoding>
            <expiration max-idle="3600000" />
            <memory max-count="1000" />
        </local-cache>
        <distributed-cache name="actionTokens" owners="2">
            <encoding>
                <key media-type="application/x-java-object" />
                <value media-type="application/x-java-object" />
            </encoding>
            <expiration max-idle="-1" lifespan="-1" interval="300000" />
            <memory max-count="-1" />
        </distributed-cache>
    </cache-container>
</infinispan>

**KEYCLOAK-1**

```shell
docker run --rm --name keycloak-1 -p 8080:8080 \
  -e KEYCLOAK_ADMIN=admin \
  -e KEYCLOAK_ADMIN_PASSWORD=admin \
  -e KC_DB_URL_HOST=192.168.178.91 \
  -e KC_DB_URL_DATABASE=p-keycloak \
  -e KC_DB_USERNAME=keycloak \
  -e KC_DB_PASSWORD=password \
  -e JGROUPS_DISCOVERY_EXTERNAL_IP=192.168.178.88 \
  -e KC_CACHE_CONFIG_FILE=cache-ispn-jdbc-ping.xml \
  -v ${PWD}/cache-ispn-jdbc-ping.xml:/opt/keycloak/conf/cache-ispn-jdbc-ping.xml \
  quay.io/keycloak/keycloak start-dev
```


**KEYCLOAK-2**

```shell
docker run --rm --name keycloak-2 -p 8080:8080 \
  -e KEYCLOAK_ADMIN=admin \
  -e KEYCLOAK_ADMIN_PASSWORD=admin \
  -e KC_DB_URL_HOST=192.168.178.91 \
  -e KC_DB_URL_DATABASE=p-keycloak \
  -e KC_DB_USERNAME=keycloak \
  -e KC_DB_PASSWORD=password \
  -e JGROUPS_DISCOVERY_EXTERNAL_IP=192.168.178.88 \
  -e KC_CACHE_CONFIG_FILE=cache-ispn-jdbc-ping.xml \
  -v ${PWD}/cache-ispn-jdbc-ping.xml:/opt/keycloak/conf/cache-ispn-jdbc-ping.xml \
  quay.io/keycloak/keycloak start-dev
```

**POSTGRES**

```shell
docker run -d --rm --name p-keycloak \
  -p 5432:5432 \
  -e POSTGRES_DB=p-keycloak \
  -e POSTGRES_USER=keycloak \
  -e POSTGRES_PASSWORD=password \
  postgres:latest
```

First, if both Keycloak containers run on the same host, map them to different host ports (for example keycloak-1 on 8080, keycloak-2 on 8081). Second, you should start Keycloak in production mode (`start` instead of `start-dev`); I don't think clustering will work in dev mode. You will need certificates for that.
Also, since you are not using docker-compose (my suggestion is to switch to it), create a Docker network and attach all three containers to it: keycloak-1, keycloak-2, and postgres.
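As a sketch, attaching everything to one user-defined network could look like the commands below (the network name is illustrative, and the other values are taken from the commands above). Note that a plain bridge network only spans a single host; across separate servers you would need an overlay network or host networking instead.

```shell
# Create a user-defined bridge network (name is illustrative)
docker network create keycloak-net

docker run -d --rm --name p-keycloak --network keycloak-net \
  -e POSTGRES_DB=p-keycloak \
  -e POSTGRES_USER=keycloak \
  -e POSTGRES_PASSWORD=password \
  postgres:latest

# On the same user-defined network, containers resolve each other by name,
# so KC_DB_URL_HOST can simply be the database container's name.
docker run --rm --name keycloak-1 --network keycloak-net -p 8080:8080 \
  -e KEYCLOAK_ADMIN=admin \
  -e KEYCLOAK_ADMIN_PASSWORD=admin \
  -e KC_DB_URL_HOST=p-keycloak \
  -e KC_DB_URL_DATABASE=p-keycloak \
  -e KC_DB_USERNAME=keycloak \
  -e KC_DB_PASSWORD=password \
  -e KC_CACHE_CONFIG_FILE=cache-ispn-jdbc-ping.xml \
  -v ${PWD}/cache-ispn-jdbc-ping.xml:/opt/keycloak/conf/cache-ispn-jdbc-ping.xml \
  quay.io/keycloak/keycloak start-dev
```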

Good luck! You should read this: Configuring distributed caches - Keycloak, plus all the docs related to high availability (or the cross-DC setup, as it was called a while back: Multi-site deployments - Keycloak), and keep in mind that not all of this is implemented yet. You should also read everything in the Server Administration guide related to cross-DC.

Or you can take the slightly easier route I took once and connect them logically, using a federation that communicates with the remote Keycloaks over plain HTTPS (no need for Infinispan or DB sync at the technology level). However, it was much, much harder to make absolutely everything work seamlessly across multiple Keycloak deployments as if it were all one system, while keeping the original data in its place. It took more than six months to fully implement: the auth flow with the login, all the behaviour in the rest of the flows such as password reset, all the required-actions behaviour, plus all the custom functionality that was already built.

@stancristian88 Thank you for your message. I’m exploring the possibility of connecting two Docker containers with Keycloak without using Docker Swarm or Kubernetes. In my latest attempt, I managed to connect these three containers and found indications in the Keycloak container logs that the cluster was recognized. Additionally, I observed two Keycloak instances in the JGROUPSPING database. However, when I began testing this setup, Keycloak was unable to share sessions, users, etc. Any insights or suggestions on resolving this issue would be greatly appreciated.
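To double-check what the discovery layer actually stored, you can query the JDBC_PING table directly (a sketch, assuming the container name and credentials from the commands above; Postgres folds the unquoted table name `JGROUPSPING` to lowercase):

```shell
# Inspect JDBC_PING discovery rows; each clustered node should have one entry.
docker exec p-keycloak psql -U keycloak -d p-keycloak \
  -c "SELECT own_addr, bind_addr, cluster_name, updated FROM public.jgroupsping;"
```

If both nodes appear here but sessions are still not shared, the problem is usually not discovery but the actual JGroups transport (nodes unable to reach each other on the TCP bind port).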

Oh! I see now. So I think you want to use the JGroups JDBC_PING way of creating the cluster. I don't have a code sample at hand, but I had the same problem and I can outline what you can do; I hope you will find the answer. Hint: it is in this forum, in an older question.

So first: what is the problem? The modern, Quarkus-based Keycloak does not support out of the box the JDBC_PING mechanism that was so easy to use in the older WildFly versions. In the Keycloak documentation, on the page All configuration - Keycloak, you will see the config option called cache-stack, and the only accepted values are: tcp, udp, kubernetes, ec2, azure, google.

So no jdbc_ping. However, there is a way to do it without too much hassle. You will notice in your Keycloak Docker container a file at the path /opt/keycloak/conf/cache-ispn.xml (or something like that; as I said, I don't have the container in front of me, but it should be very easy to find). You can copy that default file to the place where you build your Keycloak container, and in the Dockerfile where you customize Keycloak you just need to override that file. Then, in cache-ispn.xml, you can add a jgroups config before the cache-container block, something like this:

<jgroups>
    <stack name="jdbc-ping-tcp" extends="tcp">
        <!-- ... JDBC_PING protocol configuration goes here ... -->
    </stack>
</jgroups>

<cache-container name="keycloak">

With this extension in place, everything should work. You can take inspiration from this thread (found it in the meantime: Use of JDBC_PING with Keycloak 17 (Quarkus distro) - #38 by keepthemomentum).
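As a sketch of the "copy the default file out" step (the image name and path are the usual ones for the Quarkus distribution, but verify them for your version):

```shell
# Create a stopped container just to copy the default cache config out of it
docker create --name kc-tmp quay.io/keycloak/keycloak
docker cp kc-tmp:/opt/keycloak/conf/cache-ispn.xml ./cache-ispn.xml
docker rm kc-tmp
```

You can then edit the copied file and mount it back in (or COPY it in your Dockerfile), pointing Keycloak at it via the cache config file option, as in the commands earlier in this thread.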

However, there is a small caveat. In our case we used a custom port, not the default one (those were the rules set by the cloud provider; the reason does not matter). So we had to look up exactly what the default configuration is and which Java system property the port is read from, and pass that to the Docker container on start. But if you go with the default port you will have no problem with the approach described in the thread I shared; I made this setup work following those steps.
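For reference, in Infinispan's stock TCP stack the bind port is usually read as `${jgroups.bind.port,jgroups.tcp.port:7800}`, so overriding it might look like the sketch below. The property name is an assumption based on that default stack definition; verify it against the stack shipped with your Keycloak version.

```shell
# Override the JGroups bind port (7800 is the usual default) and publish it.
# The -Djgroups.bind.port property name is an assumption; check your stack's
# TCP element to see which property its bind_port attribute actually reads.
docker run --rm --name keycloak-1 -p 8080:8080 -p 7900:7900 \
  -e JAVA_OPTS_APPEND="-Djgroups.bind.port=7900" \
  -e KC_CACHE_CONFIG_FILE=cache-ispn-jdbc-ping.xml \
  -v ${PWD}/cache-ispn-jdbc-ping.xml:/opt/keycloak/conf/cache-ispn-jdbc-ping.xml \
  quay.io/keycloak/keycloak start-dev
```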