xinit

Reputation: 1916

Using s3 as fs.default.name or HDFS?

I'm setting up a Hadoop cluster on EC2 and I'm wondering how to do the DFS. All my data is currently in S3, and all map/reduce applications use S3 file paths to access the data. Now, I've been looking at how Amazon's EMR is set up, and it appears that a namenode and datanodes are set up for each jobflow. I'm wondering if I really need to do it that way, or if I could just use s3(n) as the DFS? If doing so, are there any drawbacks?

Thanks!

Upvotes: 1

Views: 5998

Answers (4)

Zhen Zeng

Reputation: 61

https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-common/core-default.xml

fs.default.name is deprecated; fs.defaultFS is the preferred property.
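
For example, a minimal core-site.xml entry using the newer key might look like this (the bucket name is a placeholder):

<property>
        <name>fs.defaultFS</name>
        <value>s3n://your-bucket-name</value>
</property>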

Upvotes: 1

Kiran Karnam

Reputation: 35

I was able to get the s3 integration working using

<property>
        <name>fs.default.name</name>
        <value>s3n://your-bucket-name</value>
</property> 

in core-site.xml, and I could get the list of files using the hdfs ls command. But you should also have namenode and separate datanode configurations, because I was still not sure how the data gets partitioned across the datanodes.
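
As a rough example, with the property above in place, listing the bucket's contents should work with something like this (the bucket name is a placeholder):

hadoop fs -ls s3n://your-bucket-name/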

Should we have local storage for the namenode and datanodes?

Upvotes: 0

Florin Diaconeasa

Reputation: 76

In order to use S3 instead of HDFS, fs.default.name in core-site.xml needs to point to your bucket:

<property>
        <name>fs.default.name</name>
        <value>s3n://your-bucket-name</value>
</property>

It's recommended that you use S3N and NOT the plain S3 implementation, because S3N writes regular files that are readable by any other application and by yourself :)

Also, in the same core-site.xml file you need to specify the following properties (see the sketch after this list):

  • fs.s3n.awsAccessKeyId
  • fs.s3n.awsSecretAccessKey
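
A minimal sketch of how these might be set in core-site.xml (the values are placeholders for your own credentials):

<property>
        <name>fs.s3n.awsAccessKeyId</name>
        <value>YOUR_ACCESS_KEY_ID</value>
</property>
<property>
        <name>fs.s3n.awsSecretAccessKey</name>
        <value>YOUR_SECRET_ACCESS_KEY</value>
</property>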

Upvotes: 5

mat kelcey

Reputation: 3135

Any intermediate data of your job goes to HDFS, so yes, you still need a namenode and datanodes.
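
For example (the jar, class, and bucket names here are hypothetical), you could keep HDFS as the default filesystem and still read from and write to S3 by passing full s3n:// URIs to the job:

hadoop jar my-job.jar MyJob s3n://your-bucket-name/input/ s3n://your-bucket-name/output/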

Upvotes: 1
