Is it possible to set the Hadoop block size to 24 MB?


I just want to ask your opinion about the HDFS block size. I set the HDFS block size to 24 MB and it runs normally. But I remember that 24 MB is not a power of 2, which is the usual convention for sizes on computers. So I want to ask all of you: what is your opinion on using 24 MB?

Thanks all....


There are 2 answers

V Sree Harissh (best answer)

Yes, it is possible to set the HDFS block size to 24 MB. The default in Hadoop 1.x is 64 MB, and in 2.x it is 128 MB.

In my opinion, increase the block size. The larger the block size, the fewer map tasks a job creates, so less time is spent shuffling their output into the reduce phase, and things speed up. If you reduce the block size, each map task finishes sooner, but there are more of them, so chances are that more time will be spent in the reduce phase, increasing the overall job time.
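The trade-off above can be sketched with a quick calculation of how many map tasks (roughly one per block, in a classic MapReduce job) a given input produces at different block sizes. The 1 GiB file size here is a made-up example:

```python
import math

# Hypothetical 1 GiB input file; each HDFS block becomes
# (roughly) one map task in a classic MapReduce job.
file_size = 1024 * 1024 * 1024

for bs_mb in (24, 64, 128):
    block_size = bs_mb * 1024 * 1024
    num_tasks = math.ceil(file_size / block_size)
    print(f"{bs_mb:>3} MB blocks -> {num_tasks} map tasks")
```

Smaller blocks mean more, shorter map tasks; larger blocks mean fewer, longer ones whose combined output reaches the reducers in fewer, bigger pieces.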

You can change the block size with the command below while transferring a file from the local file system to HDFS:

hadoop fs -D dfs.blocksize=<blocksize> -put <source_filename> <destination>
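`dfs.blocksize` takes the size in bytes (newer Hadoop versions also accept suffixes such as `24m`, though that is worth verifying for your version). A quick sketch of the byte values involved:

```python
# Convert a size in MB (MiB, i.e. powers of 1024) to the byte
# value expected by dfs.blocksize.
def mb_to_bytes(mb):
    return mb * 1024 * 1024

print(mb_to_bytes(24))   # 25165824  - the 24 MB block from the question
print(mb_to_bytes(128))  # 134217728 - the Hadoop 2.x default
```

So for a 24 MB block size the command would use `-D dfs.blocksize=25165824`.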

A permanent change of the block size can be made by adding the following to hdfs-site.xml (the value is in bytes; 134217728 is 128 MB). Note that in Hadoop 2.x the property is named dfs.blocksize; dfs.block.size is the deprecated 1.x name:

<property>
  <name>dfs.block.size</name>
  <value>134217728</value>
  <description>Block size</description>
</property>
ss sreekanth

Yes, it is possible to set the block size in a Hadoop environment. Simply go to /usr/local/hadoop/conf/hdfs-site.xml and change the block size value. Refer: http://commandstech.com/blocksize-in-hadoop/