Kafka has established itself as a crucial piece of infrastructure in many organizations. It proves its worth in high-scale applications, but that scale brings its own challenges: any downtime directly impacts customers and, ultimately, the business. Nevertheless, upgrading such critical infrastructure is necessary to ensure better performance. Migrating a Kafka cluster without downtime is the scenario you will ideally wish for, but is it that easy?
Kafka migration may be required for several major reasons:
- Hardware updates,
- Disk failures, and
- On-premise to cloud movement (or vice versa), etc.
Occasions will arise when you have to move your running cluster to a different set of hardware. So the question is: can you accomplish it without customers noticing, i.e., without any downtime? Since the procedure is not yet automated, it is a little laborious. To make it go more smoothly, here is the list of tasks that must be completed.
Steps To Migrate Kafka Cluster
Kafka uses Zookeeper behind the scenes for health checks and cluster coordination. Although either component could be moved first, migrating the Kafka brokers first is preferable because it is far easier than migrating the Zookeeper ensemble.
Migrating Kafka Brokers
Imagine that the currently running brokers have broker ids 10, 20, and 30, and that the new brokers have broker ids 40, 50, and 60. The objective is to decommission the currently running brokers and bring up the new brokers with zero downtime.
Step #0: Expand cluster by incorporating brokers 40, 50 and 60
Kafka cluster expansion is an easy task. Simply start brokers on the new servers (each with a unique broker id, pointing at the same Zookeeper ensemble) and then confirm that they have joined the existing cluster.
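As a minimal sketch, a new broker only needs a unique `broker.id` and the same `zookeeper.connect` string as the existing cluster. The hostnames and paths below are placeholders:

```properties
# server.properties for new broker 40 (hostnames are illustrative)
broker.id=40
listeners=PLAINTEXT://new-broker-1:9092
log.dirs=/var/lib/kafka/data
# Point at the SAME Zookeeper ensemble used by brokers 10, 20, and 30
zookeeper.connect=zk-1:2181,zk-2:2181,zk-3:2181
```

Once started, the broker registers itself in Zookeeper under `/brokers/ids/40`, but it will not hold any partitions until they are explicitly reassigned to it.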
Step #1: Increase The Replication Factor
The process of increasing the replication factor of topics has to be initiated manually, but once started it runs automatically. Kafka ships with a partition reassignment tool that can adjust the replication factor of any topic. It operates in three modes:
- Generate,
- Execute, and
- Verify.
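A sketch of the three modes using `kafka-reassign-partitions.sh` (exact flags vary by Kafka version; older releases use `--zookeeper` where newer ones use `--bootstrap-server`, and the hostnames here are placeholders):

```shell
# 1. Generate: propose an assignment that places replicas on brokers 40, 50, 60.
#    The output contains both the current and the proposed assignment; copy the
#    proposed JSON into a file such as proposed.json before executing.
kafka-reassign-partitions.sh --zookeeper zk-1:2181 \
  --topics-to-move-json-file topics.json \
  --broker-list "40,50,60" --generate

# 2. Execute: apply the (possibly hand-edited) reassignment
kafka-reassign-partitions.sh --zookeeper zk-1:2181 \
  --reassignment-json-file proposed.json --execute

# 3. Verify: re-run until every partition reports the reassignment completed
kafka-reassign-partitions.sh --zookeeper zk-1:2181 \
  --reassignment-json-file proposed.json --verify
```

During the migration, the proposed replica lists are typically widened to include both old and new brokers, so every partition stays fully replicated while data copies over.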
Step #2: Decommission The Old brokers
Once the reassignment from Step #1 has completed, simply shift the DNS to the new brokers and shut down the old ones.
Step #3: Shrink The Replication Factor To Its Initial Value
The same partition reassignment tool is used here: run it again with replica lists that contain only the new brokers, restoring the original replication factor.
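For illustration, a hypothetical reassignment file for a single partition of a made-up topic, with the replica list trimmed back to the new brokers only:

```json
{
  "version": 1,
  "partitions": [
    { "topic": "orders", "partition": 0, "replicas": [40, 50, 60] }
  ]
}
```

Feeding this file to the reassignment tool with `--execute` drops brokers 10, 20, and 30 from that partition's replica set, returning the replication factor to its initial value.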
It’s Time To Migrate The Zookeepers
Let us assume that the currently running zookeepers have ids 10, 20, and 30, and that the new zookeepers have ids 40, 50, and 60. The objective is to decommission the currently running zookeepers while starting the new ones with zero downtime.
Step #0: Expand The Cluster By Incorporating Zookeeper 40
Steps for expanding zookeeper cluster:
- Start zookeeper 40 with an updated configuration file having a new zookeeper entry. Ensure that this zookeeper instance is following the leader after joining the cluster.
- Perform a rolling restart of all other instances of zookeeper within the cluster. It must be done with the updated configuration file (comprising 4 members).
Remember: You should restart all the followers prior to doing the same for the leader.
- Ensure that the cluster of instances 10, 20, 30, and 40 is in quorum and serving requests.
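A sketch of the updated configuration file shared by all four members after the rolling restart (hostnames and the data directory are placeholders; 2888/3888 are the conventional quorum and leader-election ports):

```properties
# zoo.cfg after adding instance 40
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.10=zk-old-1:2888:3888
server.20=zk-old-2:2888:3888
server.30=zk-old-3:2888:3888
server.40=zk-new-1:2888:3888
```

Note that a 4-member ensemble still needs 3 members for quorum, so it tolerates only one failure; the even-sized ensemble is acceptable only as a transitional state.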
Now, there is a 4-membered zookeeper cluster 10, 20, 30, and 40.
Next, you have to decommission a zookeeper instance. If instance 40 has become the leader, you can decommission any of the old instances; otherwise (say instance 30 is the leader), you should decommission the followers first.
Step #1: Decommission Zookeeper 10
It is now time to remove instance 10 from the cluster. There is no need to include instance 10 in any further restarts or configuration changes.
Steps for decommissioning zookeeper 10:
- Firstly, instance 10 must be shut down.
- From the configuration file (having instances 20, 30, 40), discard instance 10 entry and perform a rolling restart.
Remember: Prior to restarting the leader, you must restart all the followers.
- Ensure that the cluster of instances 20, 30, and 40 is in quorum and serving requests.
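Quorum status can be checked with Zookeeper's four-letter-word commands (hostnames are placeholders; on newer Zookeeper versions `srvr` must be enabled via `4lw.commands.whitelist`):

```shell
# Ask each remaining member for its status; exactly one should report "Mode: leader"
for host in zk-old-2 zk-old-3 zk-new-1; do
  echo -n "$host: "
  echo srvr | nc "$host" 2181 | grep Mode
done
```

If any member fails to respond or two members claim leadership, stop and investigate before decommissioning anything else.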
Presently, there is a 3 member zookeeper cluster 20, 30, and 40.
The zookeeper migration from 10, 20, 30 to 20, 30, 40 is concluded.
(For instance 50, you have to repeat the same steps.)
Step #2: Now, Expand The Source Cluster By Adding Zookeeper 50
Step #3: Decommission Zookeeper 20
(For instance 60, you have to repeat the same steps)
Step #4: Expand The Source Cluster By Incorporating Zookeeper 60
Note: Instances 10 and 20 have now been decommissioned. Before decommissioning instance 30, it is important to update the brokers' Zookeeper connection setting to point at the new instances 40, 50, and 60 and to perform a rolling restart of all the brokers.
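As a sketch, the broker-side change is a one-line edit of `zookeeper.connect` (hostnames are placeholders):

```properties
# server.properties on every broker, before decommissioning instance 30
# Old value: zookeeper.connect=zk-old-1:2181,zk-old-2:2181,zk-old-3:2181
zookeeper.connect=zk-new-1:2181,zk-new-2:2181,zk-new-3:2181
```

Restart the brokers one at a time so the cluster keeps serving requests throughout.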
Step #5: Decommission Zookeeper 30
With this, the process of zookeeper migration from 10, 20, 30 to 40, 50, 60 is complete!
To wrap up, you might have to restart some consumers once the migration is finished, and possibly Kafka Manager or any other monitoring tools in use. The Kafka experts from Ksolves are adept in Confluent Kafka development and Kafka Strimzi cluster deployment. To know more about Kafka migration and clear your doubts, feel free to contact us!
Call : +91 8130704295
Read related articles:
Integrating Apache NiFi and Apache Kafka