Hadoop Tutorial

Features of Hadoop

Let us discuss the essential features of Hadoop in the sections below:

Open source:

Apache Hadoop is an open-source project, which means its source code is freely available and can be modified to suit business requirements.

Distributed Processing:

Data is stored in HDFS in a distributed manner throughout the cluster, so it can be processed in parallel across the cluster's nodes.
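
For instance, once a file is copied into HDFS it is split into blocks and spread across the DataNodes, where each block can be processed by a separate task. A minimal sketch (the local file name and HDFS path are hypothetical):

    hdfs dfs -put input.txt /data/input.txt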

Fault Tolerance:

By default, Hadoop stores three replicas of each block throughout the cluster, and this replication factor can be changed as required. So if any node goes down, the data on that node can readily be retrieved from the replicas on other nodes. The framework also automatically recovers from node and task failures. This degree of fault tolerance is one of the most important features of Hadoop.
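
The default replication factor is controlled by the dfs.replication property in hdfs-site.xml; a minimal sketch:

    <property>
      <name>dfs.replication</name>
      <value>3</value>
    </property>

It can also be changed per file after the fact, for example (the HDFS path is hypothetical; -w waits until the new factor is reached):

    hdfs dfs -setrep -w 2 /data/input.txt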

Fig: Features of Hadoop

Reliability:

Despite machine failures, data is reliably stored on the computer cluster thanks to replication. If any node in a Hadoop cluster fails, it does not affect the whole cluster; another node takes over the work of the failed node, and the cluster continues functioning as if nothing had happened. This built-in fault tolerance is what makes Hadoop reliable.

High availability:

Despite hardware failures, data remains highly available and accessible because multiple copies of it exist in the cluster. If a machine or a piece of hardware crashes, the data can still be reached through another path.
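
Beyond data replication, the NameNode itself can be made highly available by running an active/standby pair. A minimal sketch of the relevant hdfs-site.xml properties (the nameservice id mycluster, the NameNode ids nn1 and nn2, and the host names are example values):

    <property>
      <name>dfs.nameservices</name>
      <value>mycluster</value>
    </property>
    <property>
      <name>dfs.ha.namenodes.mycluster</name>
      <value>nn1,nn2</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn1</name>
      <value>node1.example.com:8020</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn2</name>
      <value>node2.example.com:8020</value>
    </property>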

Scalability:

Hadoop is highly scalable in that new hardware can readily be added to the cluster. Hadoop offers horizontal scalability, which means new nodes can be added on the fly, without any downtime.
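
As a sketch of what adding a node involves (assuming a Hadoop 3.x cluster, with the new host already listed in etc/hadoop/workers), the daemons are simply started on the new machine and it registers with the running cluster:

    hdfs --daemon start datanode
    yarn --daemon start nodemanager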

Economical:

As it operates on a cluster of commodity hardware, Hadoop is not very costly, and no specialized machine is needed for it. Hadoop also offers enormous cost savings, as adding more nodes on the fly is very simple: if demand rises, nodes can be added without any downtime and without much pre-planning.

Easy to use:

The user does not need to deal with the mechanics of distributed computing; the framework takes care of all of it. That is what makes Hadoop simple to use.
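
For example, a distributed word count over the whole cluster is a single command using the examples jar that ships with Hadoop (the input and output paths are hypothetical); splitting the input, scheduling tasks on nodes, and retrying failures are all handled by the framework:

    yarn jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar \
        wordcount /data/input /data/output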

Data Locality:

This is one of Hadoop's unique features, and it makes Big Data easy to handle. Hadoop operates on the principle of data locality, which states that computation is moved to the data rather than the data to the computation: shipping a task to the node that already holds the relevant block is far cheaper than shipping large blocks of data across the network.
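
To see where the blocks of a file actually live, and therefore where Hadoop will try to schedule the tasks that read them, a command like the following can be used (the HDFS path is hypothetical):

    hdfs fsck /data/input.txt -files -blocks -locations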