
Control block report storm
When Datanodes come up in large clusters of more than 200 nodes, the Namenode can be overwhelmed by the incoming block reports, which can cause it to become unresponsive.
Getting ready
This recipe makes the most sense for large clusters, where "large" refers not so much to the number of nodes as to the number of blocks in the cluster.
How to do it...
- ssh to Namenode and edit the hdfs-site.xml file to add the following property to it:

<property>
    <name>dfs.blockreport.initialDelay</name>
    <value>20</value>
</property>
- Copy hdfs-site.xml across all nodes in the cluster (one way to push the file out is shown after these steps).
- Restart HDFS daemons across the nodes for the property to take effect:
$ stop-dfs.sh
$ start-dfs.sh
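One way to push the updated hdfs-site.xml out and confirm that the new value is being picked up is sketched below. The slaves file and HADOOP_CONF_DIR locations are assumptions based on a typical Hadoop 2.x layout, so adjust them for your environment:

# push hdfs-site.xml to every node listed in the slaves file
$ for host in $(cat $HADOOP_CONF_DIR/slaves); do
>   scp $HADOOP_CONF_DIR/hdfs-site.xml $host:$HADOOP_CONF_DIR/
> done

# verify the configured value on a node that has the updated file
$ hdfs getconf -confKey dfs.blockreport.initialDelay
20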
How it works...
The dfs.blockreport.initialDelay parameter specifies a time in seconds. It is the upper limit of the delay, and each Datanode independently picks a random value within that limit. This means a few Datanodes may take the value of 1, others 2, and a few others 10, but the values are capped at 20 seconds.
Now, instead of the block report being sent immediately when a Datanode comes up, it is delayed by that randomly chosen number of seconds. Spreading the reports out in this way reduces the block advertisement storm on the Namenode.
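To make the random back-off concrete, here is a minimal shell sketch of the behaviour (an illustration only, not the actual Datanode code): with dfs.blockreport.initialDelay set to 20, each Datanode effectively does the equivalent of the following at startup:

# pick an independent random delay between 0 and 20 seconds, then wait
# that long before sending the first block report to the Namenode
$ DELAY=$(( RANDOM % 21 ))
$ echo "Delaying first block report by $DELAY seconds"
$ sleep $DELAY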