iT邦幫忙

2021 iThome Ironman Contest (鐵人賽)

DAY 8
Modern Web

"Kafka's Library" - The Kafka Development and Implementation Programmers Must Know, Part 8

Kafka's Library [Book8] - Manually Re-electing a Kafka Partition Leader


“I cannot make you understand. I cannot make anyone understand what is happening inside me. I cannot even explain it to myself.”
― Franz Kafka, The Metamorphosis
Perhaps, as Lao Gao says, we are all just here to cultivate ourselves.


Today continues from yesterday, where we simulated one broker going down and then recovering. That left two partition leaders sitting on a single broker, which goes against our goal of spreading traffic evenly across brokers, so today I will briefly demonstrate how to manually trigger a partition leader re-election.

  • Every Kafka partition has one leader, and each leader has zero or more followers.
  • The leader handles all reads and writes for its partition; the followers fetch data from the leader, just like an ordinary consumer.

1. First, check the topic's current partition assignment

$ kafka-topics --describe --zookeeper 127.0.0.1:2181 --topic topicWithThreeBroker

Topic: topicWithThreeBroker	TopicId: BAocHAwHR_STmwAUlI3YMw	PartitionCount: 3	ReplicationFactor: 2	Configs:
	Topic: topicWithThreeBroker	Partition: 0	Leader: 1	Replicas: 1,0	Isr: 1,0
	Topic: topicWithThreeBroker	Partition: 1	Leader: 2	Replicas: 2,1	Isr: 1,2
	Topic: topicWithThreeBroker	Partition: 2	Leader: 2	Replicas: 0,2	Isr: 2,0
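In this output, the first broker listed in the Replicas column is the partition's preferred leader. To spot which partitions currently have a non-preferred leader, the describe output can be scanned programmatically; a minimal sketch (the helper name `unpreferred_partitions` is mine, and the sample lines are transcribed from the output above):

```python
# Flag partitions whose current leader differs from the preferred
# (first-listed) replica, given kafka-topics --describe output.
import re

def unpreferred_partitions(describe_output: str):
    """Return (partition, leader, preferred) for each partition whose
    leader is not the first replica in its replica list."""
    result = []
    for line in describe_output.splitlines():
        m = re.search(
            r"Partition:\s*(\d+)\s+Leader:\s*(\d+)\s+Replicas:\s*([\d,]+)", line
        )
        if not m:
            continue
        partition, leader = int(m.group(1)), int(m.group(2))
        preferred = int(m.group(3).split(",")[0])
        if leader != preferred:
            result.append((partition, leader, preferred))
    return result

sample = """
Topic: topicWithThreeBroker\tPartition: 0\tLeader: 1\tReplicas: 1,0\tIsr: 1,0
Topic: topicWithThreeBroker\tPartition: 1\tLeader: 2\tReplicas: 2,1\tIsr: 1,2
Topic: topicWithThreeBroker\tPartition: 2\tLeader: 2\tReplicas: 0,2\tIsr: 2,0
"""
print(unpreferred_partitions(sample))  # only partition 2 is off its preferred leader
```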

2. Next, create a JSON file listing the topics and partitions whose leaders should be re-elected

$ vim leader_election.json
Add the following content:
{ "partitions":
  [
    { "topic": "topicWithThreeBroker", "partition": 0 },
    { "topic": "topicWithThreeBroker", "partition": 1 },
    { "topic": "topicWithThreeBroker", "partition": 2 }
  ]
}
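Instead of typing the file by hand, the same JSON can be generated with a few lines of Python's standard library; a small sketch (the helper name `build_election_json` is mine; topic and partition count are the ones from this example):

```python
# Build the leader_election.json payload covering every partition of a topic.
import json

def build_election_json(topic: str, partition_count: int) -> str:
    payload = {
        "partitions": [
            {"topic": topic, "partition": p} for p in range(partition_count)
        ]
    }
    return json.dumps(payload, indent=2)

doc = build_election_json("topicWithThreeBroker", 3)
print(doc)
# Write it where kafka-leader-election expects it:
# with open("leader_election.json", "w") as f:
#     f.write(doc)
```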

3. Re-elect the leaders with kafka-leader-election

$ kafka-leader-election --path-to-json-file leader_election.json --election-type preferred --bootstrap-server :9092

Successfully completed leader election (PREFERRED) for partitions topicWithThreeBroker-2
Valid replica already elected for partitions topicWithThreeBroker-0, topicWithThreeBroker-1

Here you can see that only partition 2 went through a re-election: it was the only partition whose current leader (broker 2) was not its preferred leader, i.e. the first replica in its replica list (broker 0).

4. Check the topic's state after the re-election

$ kafka-topics --describe --zookeeper 127.0.0.1:2181 --topic topicWithThreeBroker

Topic: topicWithThreeBroker	TopicId: BAocHAwHR_STmwAUlI3YMw	PartitionCount: 3	ReplicationFactor: 2	Configs:
	Topic: topicWithThreeBroker	Partition: 0	Leader: 1	Replicas: 1,0	Isr: 1,0
	Topic: topicWithThreeBroker	Partition: 1	Leader: 2	Replicas: 2,1	Isr: 1,2
	Topic: topicWithThreeBroker	Partition: 2	Leader: 0	Replicas: 0,2	Isr: 2,0

Originally two partition leaders were crowded onto Broker2; after the re-election they are spread evenly again, which keeps any single machine from carrying a disproportionate share of the load.
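Whether the load is balanced can be verified by counting how many leaders each broker holds in the describe output; a small sketch (the helper name `leaders_per_broker` is mine, and the sample lines are the post-election output above):

```python
# Count partition leaders per broker from kafka-topics --describe output.
import re
from collections import Counter

def leaders_per_broker(describe_output: str) -> Counter:
    return Counter(
        int(m.group(1))
        for m in re.finditer(r"Leader:\s*(\d+)", describe_output)
    )

after_election = """
Topic: topicWithThreeBroker\tPartition: 0\tLeader: 1\tReplicas: 1,0\tIsr: 1,0
Topic: topicWithThreeBroker\tPartition: 1\tLeader: 2\tReplicas: 2,1\tIsr: 1,2
Topic: topicWithThreeBroker\tPartition: 2\tLeader: 0\tReplicas: 0,2\tIsr: 2,0
"""
print(leaders_per_broker(after_election))  # one leader on each broker
```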

If you run the election again at this point, the tool reports that there are no partitions left to re-elect, because the partition leaders are already evenly distributed:

$ kafka-leader-election --path-to-json-file leader_election.json --election-type preferred --bootstrap-server :9092
Valid replica already elected for partitions

The KafkaController's automatic election strategy when partitions are added

In yesterday's simulation, after we shut down broker0 it was actually the KafkaController that automatically elected a new leader for partition 2; it is also the KafkaController that notifies every broker to refresh its metadataCache whenever the Isr changes. Moreover, when new partitions are added to a topic, it is again the KafkaController that performs the automatic election and assignment.
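Kafka's real assignment code (in the controller and admin utilities) also randomizes the starting broker and staggers follower placement, but the core idea is a round-robin spread of preferred leaders across the brokers. A simplified sketch of that idea, not Kafka's exact algorithm:

```python
# Simplified round-robin replica assignment: the first replica of each
# partition (its preferred leader) rotates across the brokers, and the
# remaining replicas follow on the next brokers in the ring.
def assign_replicas(brokers, partition_count, replication_factor):
    assignment = {}
    n = len(brokers)
    for p in range(partition_count):
        assignment[p] = [
            brokers[(p + r) % n] for r in range(replication_factor)
        ]
    return assignment

plan = assign_replicas([0, 1, 2], partition_count=9, replication_factor=2)
for partition, replicas in plan.items():
    print(partition, replicas)  # replicas[0] is the preferred leader
```

With nine partitions, three brokers, and a replication factor of two, each broker ends up as the preferred leader of exactly three partitions, matching the even spread seen in the demo below.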

1. The topic starts in the same assignment state as above

$ kafka-topics --describe --zookeeper 127.0.0.1:2181 --topic topicWithThreeBroker

Topic: topicWithThreeBroker	TopicId: BAocHAwHR_STmwAUlI3YMw	PartitionCount: 3	ReplicationFactor: 2	Configs:
	Topic: topicWithThreeBroker	Partition: 0	Leader: 1	Replicas: 1,0	Isr: 1,0
	Topic: topicWithThreeBroker	Partition: 1	Leader: 2	Replicas: 2,1	Isr: 1,2
	Topic: topicWithThreeBroker	Partition: 2	Leader: 2	Replicas: 0,2	Isr: 2,0

2. Add six more partitions to topic topicWithThreeBroker, for a total of nine

$ kafka-topics --zookeeper 127.0.0.1:2181 --topic topicWithThreeBroker --alter --partitions 9

WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected
Adding partitions succeeded!

3. Inspect how the new partitions were assigned automatically

$ kafka-topics --describe --zookeeper 127.0.0.1:2181 --topic topicWithThreeBroker

Topic: topicWithThreeBroker	TopicId: BAocHAwHR_STmwAUlI3YMw	PartitionCount: 9	ReplicationFactor: 2	Configs:
	Topic: topicWithThreeBroker	Partition: 0	Leader: 1	Replicas: 1,0	Isr: 1,0
	Topic: topicWithThreeBroker	Partition: 1	Leader: 2	Replicas: 2,1	Isr: 2,1
	Topic: topicWithThreeBroker	Partition: 2	Leader: 2	Replicas: 0,2	Isr: 2,0
	Topic: topicWithThreeBroker	Partition: 3	Leader: 1	Replicas: 1,2	Isr: 1,2
	Topic: topicWithThreeBroker	Partition: 4	Leader: 2	Replicas: 2,1	Isr: 2,1
	Topic: topicWithThreeBroker	Partition: 5	Leader: 0	Replicas: 0,2	Isr: 0,2
	Topic: topicWithThreeBroker	Partition: 6	Leader: 1	Replicas: 1,2	Isr: 1,2
	Topic: topicWithThreeBroker	Partition: 7	Leader: 2	Replicas: 2,1	Isr: 2,1
	Topic: topicWithThreeBroker	Partition: 8	Leader: 0	Replicas: 0,2	Isr: 0,2

You can see that the KafkaController's default assignment strategy spreads partitions evenly across the brokers: the preferred leader (the first replica) of the nine partitions rotates round-robin, landing on each broker exactly three times. (Partition 2's actual leader here is still broker 2, since this scenario starts from the pre-election state.)
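The even spread can be confirmed by counting how often each broker appears as the preferred (first) replica in the nine-partition output. A quick sketch, with the replica lists transcribed from the output above:

```python
# Count how often each broker appears as the preferred (first) replica.
from collections import Counter

# (partition -> replica list) transcribed from the describe output above.
replica_lists = {
    0: [1, 0], 1: [2, 1], 2: [0, 2],
    3: [1, 2], 4: [2, 1], 5: [0, 2],
    6: [1, 2], 7: [2, 1], 8: [0, 2],
}
preferred = Counter(replicas[0] for replicas in replica_lists.values())
print(preferred)  # each broker is the preferred leader of three partitions
```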


Previous post
Kafka's Library [Book7] - Creating a Topic in Kafka, Hands-On
Next post
Kafka's Library [Book9] - Kafka Partition Reassign