Reputation: 2868
I'm trying to connect to a Hadoop cluster via pyarrow's HdfsClient / hdfs.connect().
I noticed pyarrow has a have_libhdfs3() function, which returns False.
How does one go about getting the required HDFS support for pyarrow? I understand there's a conda command for libhdfs3, but I need to make it work in some "vanilla" way that doesn't involve tools like conda.
If it's of importance, the files I'm interested in reading are Parquet files.
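For reference, the failing pattern can be sketched like this; the helper name, host, port, and path are all placeholders, and the code targets the legacy pa.hdfs API from pyarrow of that era (it has since been removed from modern releases):

```python
def read_remote_parquet(host, port, path, user=None):
    # Hypothetical helper, not part of pyarrow's API.
    # Imported here because it relies on the legacy pa.hdfs
    # interface (old pyarrow releases); modern pyarrow removed it.
    import pyarrow as pa

    # have_libhdfs3() reports whether the libhdfs3 shared library
    # could be loaded -- it must be True before connecting with
    # the libhdfs3 driver.
    if not pa.have_libhdfs3():
        raise RuntimeError("libhdfs3 is not installed/loadable")

    fs = pa.hdfs.connect(host=host, port=port, user=user,
                         driver='libhdfs3')
    # The legacy filesystem object can read Parquet files directly
    return fs.read_parquet(path)

# Usage (requires a live cluster):
#   table = read_remote_parquet('namenode', 8020, '/data/file.parquet')
```

Until libhdfs3 is installed system-wide, have_libhdfs3() stays False and the connect call fails.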
EDIT:
The creators of the hdfs3 library maintain a repo that makes it possible to install libhdfs3: http://hdfs3.readthedocs.io/en/latest/install.html
Upvotes: 0
Views: 5350
Reputation: 2868
On Ubuntu this worked for me:
# Add the Bintray repository that hosts the libhdfs3 packages
echo "deb https://dl.bintray.com/wangzw/deb trusty contrib" | sudo tee /etc/apt/sources.list.d/bintray-wangzw-deb.list
# Enable HTTPS apt sources, refresh the package index, then install the library and headers
sudo apt-get install -y apt-transport-https
sudo apt-get update
sudo apt-get install -y libhdfs3 libhdfs3-dev
It should work on other Linux distros as well with the appropriate package manager. Taken from:
http://hdfs3.readthedocs.io/en/latest/install.html
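After these steps, a quick way to check from Python that the dynamic linker can now find the library (the soname and the helper function are illustrative; pyarrow's own have_libhdfs3() performs a load check along these lines):

```python
import ctypes

def libhdfs3_loadable(soname="libhdfs3.so"):
    # Attempt to load the shared library via the dynamic linker,
    # which is the precondition for pyarrow's libhdfs3 driver.
    try:
        ctypes.CDLL(soname)
        return True
    except OSError:
        return False

print(libhdfs3_loadable())  # True once libhdfs3 is installed system-wide
```

If this prints True, have_libhdfs3() should also return True and hdfs.connect() can use the libhdfs3 driver.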
Upvotes: 0
Reputation: 105561
I don't know of a way to get libhdfs3 except through conda-forge or building from source. You will need to conda install libhdfs3=2.2.31, since a breaking API change gave libhdfs3 a different ABI from libhdfs, and we have not addressed that in Arrow yet. See https://issues.apache.org/jira/browse/ARROW-1445 (patches welcome).
Upvotes: 1