Prasanna

Reputation: 2641

Setting dfs.blocksize to 100Kb in Hadoop

I am trying to set dfs.blocksize in Hadoop to 100 KB, which is less than the default dfs.namenode.fs-limits.min-block-size of 1 MB.

When I copy the file like

hdfs dfs -Ddfs.namenode.fs-limits.min-block-size=0 -Ddfs.blocksize=102400 -copyFromLocal inp.txt /input/inp.txt

I still get,

copyFromLocal: Specified block size is less than configured minimum value (dfs.namenode.fs-limits.min-block-size): 102400 < 1048576

I tried adding this property to hdfs-site.xml as well, but dfs.namenode.fs-limits.min-block-size does not seem to change.

How else would I change this property?

Upvotes: 2

Views: 2892

Answers (1)

David Kjerrumgaard

Reputation: 1076

Try changing the value of the dfs.namenode.fs-limits.min-block-size property in the /etc/hadoop/conf/hdfs-site.xml file and restarting the NameNode, as this may be a final property which cannot be overridden by a command line setting.
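For example, something along these lines should work (a minimal sketch; the path /etc/hadoop/conf/hdfs-site.xml is the usual location on packaged distributions and may differ in your installation, and the value is in bytes):

<!-- hdfs-site.xml: lower the NameNode's minimum allowed block size to 100 KB -->
<property>
  <name>dfs.namenode.fs-limits.min-block-size</name>
  <value>102400</value>
</property>

After restarting the NameNode, re-running the copy with -Ddfs.blocksize=102400 should no longer trip the minimum-block-size check.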

Upvotes: 2
