Description
Once a Keeper cluster is started, it is not clear how to properly change its cluster-topology configuration, specifically:
- changing a node's ID
- changing a node's hostname/endpoint
- adding a new node
A response on Stack Overflow (https://stackoverflow.com/questions/76066618/how-to-add-a-new-ch-keeper-node-to-the-existing-cluster) mentions that the configuration is stored both in the configuration file and in the Raft store (?), and that Keeper diffs the config file at runtime and applies changes to the Raft store.
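For context, the file-based part of that configuration is the `raft_configuration` block of the Keeper config; a minimal sketch of a 3-node setup (IDs, hostnames, and ports below are illustrative, not taken from the affected cluster):

```xml
<clickhouse>
    <keeper_server>
        <server_id>1</server_id>
        <raft_configuration>
            <server>
                <id>1</id>
                <hostname>10.11.1.11</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>2</id>
                <hostname>10.11.1.12</hostname>
                <port>9234</port>
            </server>
            <server>
                <id>3</id>
                <hostname>10.11.1.13</hostname>
                <port>9234</port>
            </server>
        </raft_configuration>
    </keeper_server>
</clickhouse>
```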
It seems that, e.g., change #2 above (modifying the hostname of a given node) should be made in the config file while Keeper is running on that node; however, when the hostname is changed, Keeper logs:
{} <Trace> KeeperDispatcher: Configuration update triggered, but nothing changed for Raft
When Keeper is restarted, the configuration is considered invalid:
{} <Warning> RaftConfiguration: Config will be ignored because a server with ID 1 is already present in the cluster on a different endpoint (10.11.1.11:9234). The endpoint of the current servers should not be changed. For servers on a new endpoint, please use a new ID.
At this point it is not clear how to change a node's ID or hostname, which settings can/should be edited while Keeper is running (to update the Raft store), and which can or should be edited with the Keeper service stopped.
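Reading the warning literally ("For servers on a new endpoint, please use a new ID"), one interpretation is that a node moved to a new endpoint must be added as a brand-new server under a fresh ID rather than keeping its old ID; a hedged sketch, with all IDs and addresses illustrative:

```xml
<raft_configuration>
    <server>
        <id>1</id>
        <hostname>10.11.1.11</hostname>
        <port>9234</port>
    </server>
    <!-- Former node 2, moved to a new host: re-added under a new ID
         instead of reusing ID 2 on the new endpoint (assumption based
         solely on the warning text above). -->
    <server>
        <id>4</id>
        <hostname>10.11.1.14</hostname>
        <port>9234</port>
    </server>
</raft_configuration>
```

Whether the old entry has to be removed first, and whether this can be done while the cluster is running, is exactly what is undocumented.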
Additional context
If topology-related changes require clearing/resetting the whole cluster's settings (e.g. as a result of a design decision?), it is not clear how to do that.
It is certainly not enough to delete the "coordination" data:
# rm -rf /var/lib/clickhouse/coordination/*
as there is still configuration data saved in the Raft store, with no explanation of how to:
- clear the Raft store
- back up and restore the Raft store
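For reference, the kind of full local reset one might attempt is sketched below; whether this is actually sufficient or safe is exactly what this issue asks to have documented. The directory layout (a `log/` and a `snapshots/` directory under `coordination/`), the paths, and the service name are all assumptions:

```shell
#!/bin/sh
# Hypothetical reset of ONE node's local Keeper state, assuming the
# default layout with log_storage_path and snapshot_storage_path under
# the coordination directory. Presumably this has to be run on EVERY
# node while the whole ensemble is stopped, otherwise a restarted node
# may re-learn the old membership from its peers.
reset_keeper_state() {
    data_dir="$1"   # e.g. /var/lib/clickhouse/coordination (assumption)
    rm -rf "$data_dir/log" "$data_dir/snapshots"
}

# Usage on each node (service name is an assumption):
#   systemctl stop clickhouse-keeper
#   reset_keeper_state /var/lib/clickhouse/coordination
#   systemctl start clickhouse-keeper
```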