muling

Reputation: 1

We have run into a couple of issues while testing HAWQ on our Hadoop cluster:

  1. HAWQ's SSH connections default to port 22. How can I specify a different port, such as 333 or 222?

  2. When we built the PXF plugins (hdfs, hbase, hive, and the service), each produced its own RPM package, but when I installed the pxf-service package with rpm, I got these errors:

    Failed dependencies:
        hadoop >= 2.6.0 is needed by pxf-service-0:3.0.0-root.noarch
        hadoop-hdfs >= 2.6.0 is needed by pxf-service-0:3.0.0-root.noarch

I built the PXF plugins against Hadoop 2.6.0 in CDH 5.4.0.
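A quick way to see why rpm's dependency check fails is to ask the rpm database what provides the required capabilities; the sketch below assumes CDH was installed from parcels, which do not register packages with rpm:

    # Ask the rpm database which installed packages satisfy the
    # capabilities that pxf-service requires.
    rpm -q --whatprovides hadoop hadoop-hdfs

    # If CDH was installed from parcels rather than RPMs, this reports
    # that no package provides them, and the pxf-service install fails
    # even though Hadoop itself is present on the machine.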

I would appreciate some recommendations.

Upvotes: 0

Views: 84

Answers (3)

Brian

Reputation: 1

You can change the default port that the SSH clients use by editing /etc/ssh/ssh_config; see the sketch below. I have tested this before, and it does work with gpssh.
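A minimal sketch of that client-side change, using port 333 from the question (gpssh drives the regular ssh client, so it should pick up this default):

    # /etc/ssh/ssh_config -- default options for the ssh client
    Host *
        Port 333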

Upvotes: 0

Sung Yu-wei

Reputation: 161

gpssh/HAWQ relies on the default SSH daemon; check /etc/ssh/sshd_config to change the default port.

This is not a HAWQ configuration.
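A minimal sketch of the server-side change, again using port 333 from the question; the daemon has to be restarted before the new port takes effect:

    # /etc/ssh/sshd_config -- config for the ssh daemon on each host
    Port 333

    # Restart the daemon so the new port is used
    # (the exact command depends on the distribution):
    service sshd restart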

Upvotes: 0

Jon Roberts

Reputation: 2106

  1. I don't know if you can change the port for gpssh. I've never heard that request before.

  2. Cloudera has chosen not to be part of ODPi, so HAWQ and PXF do not work with their distribution. Try Hortonworks instead.

Upvotes: 0
