
Write Operations In HDFS

In this section, I will explain the HDFS write operation in detail.

Fig: HDFS Write operation

  1. An HDFS client initiates the write operation by calling the 'create()' method of the DistributedFileSystem object, which creates a new file for writing.

  2. The DistributedFileSystem object connects to the NameNode through an RPC call and initiates the creation of a new file. This call, however, does not associate any blocks with the file yet. It is the NameNode's responsibility to verify that the file does not already exist and that the client has proper permissions to create it. If the file already exists, or the client lacks sufficient authority, the client receives an IOException. Otherwise, the call succeeds and the NameNode creates a new record for the file.

  3. Once the new record is created in the NameNode, the client receives an object of type FSDataOutputStream, which it uses to write data into HDFS by invoking the stream's write method (a minimal client-side example follows this list).

  4. FSDataOutputStream wraps a DFSOutputStream object that handles the communication with the NameNode and the DataNodes. As the client keeps writing data, DFSOutputStream keeps splitting it into packets. These packets wait in a queue called the DataQueue (see the queue sketch after this list).

  5. A separate DataStreamer component consumes this DataQueue. The DataStreamer also asks the NameNode to allocate new blocks, picking suitable DataNodes for replication.

  6. Now the replication process begins with the creation of a pipeline of DataNodes. As we have chosen a replication factor of 3, there will be 3 DataNodes in the pipeline (see the pipeline sketch after this list).

  7. The DataStreamer pours the packets into the pipeline's first DataNode.

  8. Each DataNode in the pipeline stores the packet it receives and forwards the same packet to the next DataNode in the pipeline.

  9. DFSOutputStream maintains another queue, the 'Ack Queue', to hold packets that are waiting for acknowledgement from the DataNodes.

  10. A packet is removed from the 'Ack Queue' once acknowledgements for it have been received from all the DataNodes in the pipeline. If a DataNode fails, packets from this queue are used to reinitiate the write.

  11. After the client has finished writing data, it calls the close() method. The call to close() flushes the remaining data packets into the pipeline and waits for their acknowledgements.

  12. Once the final acknowledgement is received, the NameNode is contacted to signal that the file write operation is complete.
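
To make steps 1-3 and 11-12 concrete, here is a minimal client-side write using Hadoop's Java FileSystem API. The NameNode URI and the target path are placeholders; adjust them for your own cluster.

import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder NameNode address; point this at your own cluster.
        conf.set("fs.defaultFS", "hdfs://namenode:8020");

        try (FileSystem fs = FileSystem.get(conf);
             // Steps 1-2: create() triggers the RPC to the NameNode, which
             // records the new file (no blocks are allocated to it yet).
             FSDataOutputStream out = fs.create(new Path("/user/demo/sample.txt"))) {
            // Steps 3-4: bytes written here are split into packets by the
            // underlying DFSOutputStream and placed on the DataQueue.
            out.write("Hello, HDFS!".getBytes(StandardCharsets.UTF_8));
            // Step 11: close() (via try-with-resources) flushes the remaining
            // packets and waits for their acknowledgements.
        }
        // Step 12: after the final acknowledgement, the NameNode is informed
        // that the file write is complete.
    }
}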
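
Steps 4-5 and 9-10 describe a classic producer-consumer arrangement between the writer, the DataStreamer, and the acknowledgement handler. The sketch below is an illustrative model only; the names Packet, dataQueue, ackQueue, and sendToFirstDataNode are assumptions made for the sketch, not Hadoop's private DFSOutputStream internals.

import java.util.concurrent.LinkedBlockingQueue;

// Illustrative model of the DataQueue/AckQueue interplay; all names here
// are invented for the sketch, not Hadoop's internal classes.
public class StreamerSketch {
    record Packet(long seqno, byte[] payload) {}

    private final LinkedBlockingQueue<Packet> dataQueue = new LinkedBlockingQueue<>();
    private final LinkedBlockingQueue<Packet> ackQueue = new LinkedBlockingQueue<>();

    // Step 4: the writer splits data into packets and enqueues them.
    void enqueue(Packet p) throws InterruptedException {
        dataQueue.put(p);
    }

    // Steps 5, 7, 9: the streamer thread drains the DataQueue, sends each
    // packet to the first DataNode, and parks it on the AckQueue until the
    // whole pipeline has acknowledged it.
    void streamOnce() throws InterruptedException {
        Packet p = dataQueue.take();
        ackQueue.put(p);
        sendToFirstDataNode(p);
    }

    // Step 10: a fully acknowledged packet is dropped from the AckQueue;
    // on a DataNode failure, packets still waiting here would be pushed
    // back onto the DataQueue so the write can be reinitiated.
    void onPipelineAck(long seqno) {
        ackQueue.removeIf(p -> p.seqno() == seqno);
    }

    private void sendToFirstDataNode(Packet p) { /* network send elided */ }
}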
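
Steps 6-8 amount to store-and-forward along a chain of DataNodes. Here is a minimal sketch under the same caveat: DataNodeStub is a hypothetical stand-in, and real DataNodes exchange packets over the network rather than through method calls.

import java.util.List;

// Hypothetical sketch of the store-and-forward pipeline in steps 6-8.
public class PipelineSketch {
    interface DataNodeStub {
        void store(byte[] packet); // persist the packet locally
    }

    // With a replication factor of 3 (step 6), 'pipeline' holds three
    // DataNodes. Each node stores the packet it receives and forwards the
    // same bytes to the next node in the chain (steps 7-8).
    static void forward(List<DataNodeStub> pipeline, int index, byte[] packet) {
        if (index >= pipeline.size()) return;   // end of the pipeline
        pipeline.get(index).store(packet);      // store locally
        forward(pipeline, index + 1, packet);   // forward downstream
    }
}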