Reputation: 103
I need to move files written by a Hive job that look like this
/foo/0000_0
/foo/0000_1
/bar/0000_0
into a file structure that looks like this
/foo/prefix1/prefix2-0000_0
/foo/prefix1/prefix2-0000_1
/bar/prefix1/prefix2-0000_0
before migrating this out of the cluster (using s3distcp). I've been looking through the hadoop fs commands but I can't find anything that would let me do this, and I don't want to rename the files one by one.
Upvotes: 3
Views: 188
Reputation: 1006
First, you need to create the subdirectory inside /foo. For this, use the following command:
$hdfs dfs -mkdir /foo/prefix1
This will create a subdirectory in /foo. If you want to create more subdirectories inside prefix1, run the same command again with the updated path. If you are using an older version of Hadoop (1.x), replace hdfs with hadoop.
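If you also need prefix1 under /bar (as in your example), -mkdir accepts more than one path, and the -p flag creates any missing parent directories and does not fail if a directory already exists. Something like this should work, assuming those are the only two parent directories you need:

$hdfs dfs -mkdir -p /foo/prefix1 /bar/prefix1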
Now you can move files from /foo to /foo/prefix1 using the following command. Here newfilename can be any name you want to give to your file:
$hdfs dfs -mv /foo/filename /foo/prefix1/newfilename
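Since you mentioned you don't want to do this file by file: as far as I know, -mv keeps the original file name when you pass several sources and a destination directory, so adding the prefix2- prefix really does need one -mv per file. A small shell loop can run those commands for you. This is only a sketch built around the 0000_* naming and the prefix1/prefix2 values from your question; adjust the pattern and paths to match your real layout:

for dir in /foo /bar; do
  hdfs dfs -mkdir -p "$dir/prefix1"
  # list the part files, drop the "Found N items" header line, keep the path column
  for f in $(hdfs dfs -ls "$dir/0000_*" | grep -v '^Found' | awk '{print $NF}'); do
    # move each file into prefix1, adding the prefix2- prefix to its name
    hdfs dfs -mv "$f" "$dir/prefix1/prefix2-$(basename "$f")"
  done
done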
Hope this answers your query.
Upvotes: 1