hadoop - HDFS replication factor - minimizing data loss risk


Edit - tl;dr:

Do all replica nodes have to store the file (all of its blocks) before the HDFS write is considered successful? If so, does the replication factor affect write latency?

Original question:

In Hadoop 2 you can control the number of data block replicas by setting the dfs.replication property to a value greater than 1 (the default is not 3 in all Hadoop distributions, e.g. EMR).
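
For illustration, here is a minimal sketch of how a client can set the replication factor, assuming a standard Hadoop 2.x Java client; the path used is hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReplicationFactorExample {
        public static void main(String[] args) throws Exception {
            // Client-side configuration; overrides the cluster default (dfs.replication)
            // only for files created by this client.
            Configuration conf = new Configuration();
            conf.setInt("dfs.replication", 3);

            FileSystem fs = FileSystem.get(conf);

            // The replication factor of an existing file can also be changed after the fact.
            fs.setReplication(new Path("/data/example.txt"), (short) 3); // hypothetical path
            fs.close();
        }
    }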

It's my understanding that the HDFS write behavior is to write the first replica synchronously while the others are pipelined, so replication happens in an asynchronous fashion. Is that correct?

If the above is true, there is a risk of data loss if the first node sends an ack to the namenode and then gets hit by a meteorite before being able to complete the asynchronous replication.

Is there a way to guarantee that at least a number X of nodes write the block before the write is considered successful? Would it be advisable to do so? I thought I could control this using the dfs.namenode.replication.min property, but I read that it is only used when in "safe mode" and therefore cannot be used during normal operations.
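
As a point of reference, the per-file replication factor can be passed explicitly at create time. The sketch below shows that, plus setting dfs.namenode.replication.min in a Configuration purely to document the intent; in practice that property is a NameNode-side setting in hdfs-site.xml, and the path and sizes here are made-up assumptions.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class MinReplicationSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // NameNode-side property: blocks with fewer than this many replicas are
            // not considered complete. Normally configured on the NameNode itself.
            conf.setInt("dfs.namenode.replication.min", 2);

            FileSystem fs = FileSystem.get(conf);

            // Per-file replication factor passed at create time
            // (bufferSize, replication and blockSize are explicit here).
            Path out = new Path("/data/important.txt"); // hypothetical path
            try (FSDataOutputStream stream =
                     fs.create(out, true /* overwrite */, 4096, (short) 3, 128 * 1024 * 1024L)) {
                stream.writeBytes("payload\n");
            }
            fs.close();
        }
    }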

Where did you see that replication is not reliable? From the Cloudera blog:

When files are being written, the data nodes form a pipeline to write the replicas in sequence. Data is sent through the pipeline in packets (smaller than a block), each of which must be acknowledged to count as a successful write. If a data node fails while the block is being written, it is removed from the pipeline. When the current block has been written, the name node will re-replicate it to make up for the missing replica due to the failed data node. Subsequent blocks will be written using a new pipeline with the required number of datanodes.

If any of the replicated blocks fail to write, the write fails and an error is returned to the HDFS write operation. The operation is not considered completed until all of the replicas have been written:

Here are the specific details on HDFS high availability. tl;dr: the last block is verified across all replicas before the overall write operation is considered completed. It is not sufficient to simply "fail". Instead, an automatic failover occurs, consisting of finding a different datanode and writing the failed block(s) to it/them.
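
From the client's point of view, one way to make the pipeline-acknowledgement behavior tangible is the hflush()/hsync() API on the output stream: hflush() pushes buffered data to every datanode in the write pipeline, while hsync() additionally asks them to flush to disk. A minimal sketch, assuming a made-up path:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class DurableWriteSketch {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            try (FSDataOutputStream out = fs.create(new Path("/data/journal.log"))) { // hypothetical path
                out.writeBytes("record-1\n");
                // hflush(): data is pushed to all datanodes in the pipeline
                // (visible to new readers), though not necessarily on disk yet.
                out.hflush();

                out.writeBytes("record-2\n");
                // hsync(): like hflush(), but also asks each datanode to sync to disk,
                // trading write latency for durability against node crashes.
                out.hsync();
            }
            fs.close();
        }
    }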

Details on block replica failure detection:

http://blog.cloudera.com/blog/2015/02/understanding-hdfs-recovery-processes-part-1/

If the last block of the file being written is not propagated to all datanodes in the pipeline, then the amount of data written to different nodes may differ when lease recovery happens. Before lease recovery causes the file to be closed, it's necessary to ensure the replicas of the last block have the same length; this process is known as block recovery. Block recovery is only triggered during the lease recovery process, and lease recovery only triggers block recovery on the last block of a file if that block is not in COMPLETE state (defined in a later section).
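
If a writer dies and leaves a file open, lease recovery (and with it block recovery on the last block) can also be nudged from a client. A hedged sketch, assuming the FileSystem is backed by a DistributedFileSystem and using a made-up path:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;

    public class LeaseRecoverySketch {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            if (fs instanceof DistributedFileSystem) {
                DistributedFileSystem dfs = (DistributedFileSystem) fs;
                Path stale = new Path("/data/abandoned-by-dead-writer.log"); // hypothetical path
                // Ask the NameNode to start lease recovery; returns true once the file
                // is closed and the replicas of the last block agree on its length.
                boolean closed = dfs.recoverLease(stale);
                System.out.println("file closed after recovery: " + closed);
            }
            fs.close();
        }
    }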

Details on block failure recovery:

During write pipeline operations, some datanodes in the pipeline may fail. When this happens, the underlying write operations can't just fail. Instead, HDFS will try to recover from the error to allow the pipeline to keep going and the client to continue writing to the file. The mechanism to recover from a pipeline error is called pipeline recovery.
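
Pipeline recovery behavior on datanode failure is partly tunable from the client side. The sketch below sets the replace-datanode-on-failure knobs; treat the exact keys and values as assumptions to verify against your HDFS version.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class PipelineRecoveryConfigSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Whether the client should try to replace a failed datanode in the pipeline.
            conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", true);
            // DEFAULT replaces only for larger pipelines; ALWAYS and NEVER are the other policies.
            conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "DEFAULT");
            // If replacement fails, keep writing to the surviving datanodes instead of aborting.
            conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.best-effort", true);

            FileSystem fs = FileSystem.get(conf);
            // ... writes issued through this FileSystem use the settings above ...
            fs.close();
        }
    }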

I have experienced datanode / block write failures scores of times. I have rarely experienced successful writes that were "not really" successful, and those rare occurrences were, AFAICR, due to corruption on the physical disks.

