I just want to ask your opinion about HDFS block size. I set the HDFS block size to 24 MB and it runs normally. But I remember that 24 MB is not a power of two, which is the usual convention for sizes on a computer. So I want to ask all of you: what is your opinion of 24 MB?
Thanks all....
Yes, it is possible to set the HDFS block size to 24 MB. The default in Hadoop 1.x.x is 64 MB, and in 2.x.x it is 128 MB. In my opinion, you should increase the block size, because with larger blocks a file is split into fewer map tasks, so fewer intermediate outputs reach the reduce phase and the job tends to speed up. If you reduce the block size instead, each map task finishes faster, but there is a good chance more time will be spent in the reduce phase, increasing the overall job time.
You can change the block size using the command below while transferring a file from the local file system to HDFS:
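A sketch of that command, assuming a Hadoop 2.x client; the file name `localfile` and the HDFS destination `/user/hadoop/` are placeholders:

```shell
# Copy a local file into HDFS with a 24 MB block size (24 * 1024 * 1024
# = 25165824 bytes) for this file only; the cluster default is unchanged.
hadoop fs -D dfs.blocksize=25165824 -put localfile /user/hadoop/localfile
```

Note that in Hadoop 1.x the property is named `dfs.block.size` instead of `dfs.blocksize`.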
A permanent change of block size can be made by setting it in hdfs-site.xml as shown below:
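A sketch of the hdfs-site.xml fragment, assuming Hadoop 2.x (where the property is `dfs.blocksize`; in 1.x it is `dfs.block.size`); the value is in bytes:

```xml
<property>
    <name>dfs.blocksize</name>
    <!-- 24 MB = 24 * 1024 * 1024 bytes -->
    <value>25165824</value>
</property>
```

This sets the default block size for all new files; existing files keep the block size they were written with.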