Not suited for small files – Hadoop works best with a modest number of large files, not a huge number of small files, as the per-file overhead negates the benefit. A large number of small files also overloads the NameNode, which stores metadata about every file in HDFS.
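To see why many small files strain the NameNode, here is a rough back-of-the-envelope sketch in Python. It assumes the commonly cited figure of roughly 150 bytes of NameNode heap per namespace object (file or block) and the default 128 MB HDFS block size; both numbers vary by Hadoop version and are assumptions here, not exact values.

```python
# Rough estimate of NameNode heap pressure from small vs. large files.
# ASSUMPTION: ~150 bytes of NameNode heap per namespace object (a
# commonly cited rule of thumb, not an exact figure).

BYTES_PER_OBJECT = 150          # assumed heap cost per file or block entry
BLOCK_SIZE = 128 * 1024 * 1024  # default HDFS block size (128 MB)

def namenode_heap_bytes(num_files, avg_file_size):
    """Approximate NameNode heap: one object per file plus one per block."""
    blocks_per_file = max(1, -(-avg_file_size // BLOCK_SIZE))  # ceiling division
    objects = num_files * (1 + blocks_per_file)
    return objects * BYTES_PER_OBJECT

# The same 10 GB stored as 10,000 x 1 MB files vs. 80 x 128 MB files:
small = namenode_heap_bytes(10_000, 1 * 1024 * 1024)
large = namenode_heap_bytes(80, 128 * 1024 * 1024)
print(small, large)  # small files cost far more NameNode heap for the same data
```

The point is that NameNode memory scales with the number of files and blocks, not with the total bytes stored, so the same data split into many small files is disproportionately expensive.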
Better for batch processing – The Hadoop MapReduce programming model is essentially a batch processing framework. It does not support processing streamed data.
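The batch nature of MapReduce can be seen in a toy word count written in plain Python (this is an illustration of the programming model, not Hadoop code): the reduce step cannot start until the shuffle has collected every map output for a key, so the entire input must be available up front rather than arriving as a stream.

```python
# Toy MapReduce-style word count (plain Python, illustrative only).
from collections import defaultdict

def map_phase(lines):
    # Emit (word, 1) pairs -- analogous to a Mapper.
    for line in lines:
        for word in line.split():
            yield word, 1

def shuffle(pairs):
    # Group values by key; needs ALL map output before reducing can begin.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Sum the counts for each word -- analogous to a Reducer.
    return {word: sum(counts) for word, counts in groups.items()}

batch = ["big data big cluster", "big data"]
counts = reduce_phase(shuffle(map_phase(batch)))
print(counts)  # {'big': 3, 'data': 2, 'cluster': 1}
```

Because `shuffle` must see the whole map output before `reduce_phase` runs, the model fits finite batches of data, not unbounded streams.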
Slower processing – In Hadoop, data is distributed and processed across the cluster, and disk I/O is involved at every stage: initially the data block is read from disk, the intermediate map output is written back to disk, and that data, after some handling by the Hadoop framework, is read from disk again by the reduce phase. Each of these steps adds latency. Hadoop offers no in-memory processing of the kind provided by Apache Spark.
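The disk round trips described above can be sketched with ordinary local files standing in for HDFS (a simplified illustration, not actual Hadoop behavior): the input starts on disk, the map output is written back to disk, and the reduce step reads it from disk again.

```python
# Sketch of the three disk touches in a MapReduce job, using local
# temp files in place of HDFS (illustrative only).
import json, os, tempfile
from collections import defaultdict

tmp = tempfile.mkdtemp()
input_path = os.path.join(tmp, "input.txt")
mid_path = os.path.join(tmp, "map_output.json")

# Disk touch 1: the input block lives on disk.
with open(input_path, "w") as f:
    f.write("big data big cluster\nbig data\n")

# The map phase reads the block from disk...
with open(input_path) as f:
    pairs = [(word, 1) for line in f for word in line.split()]

# Disk touch 2: ...and writes its intermediate output back to disk.
with open(mid_path, "w") as f:
    json.dump(pairs, f)

# Disk touch 3: the reduce phase reads the intermediate output again.
grouped = defaultdict(int)
with open(mid_path) as f:
    for word, one in json.load(f):
        grouped[word] += one

result = dict(grouped)
print(result)  # {'big': 3, 'data': 2, 'cluster': 1}
```

Each of those file reads and writes corresponds to a latency-adding step in the paragraph above; Spark avoids the intermediate ones by keeping working data in memory across stages.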