MikiBelavista

Reputation: 2728

Connection to dev@redshift-cluster-1. failed

I am using DataGrip to connect to my AWS Redshift cluster. Unfortunately, the connection failed.

Connection to dev@redshift-cluster-1.cfioxtdojc5x.eu-central-1.redshift.amazonaws.com failed. [28000][10100] [Amazon][JDBC](10100) Connection Refused: [Amazon][JDBC](11640) Required Connection Key(s): UID; [Amazon][JDBC](11480) Optional Connection Key(s): AccessKeyID, AuthMech, AutoCreate, BlockingRowsMode, ClusterID, DbGroups, DisableIsValidQuery, DriverLogLevel, EndpointUrl, FilterLevel, IAMDuration, Language, loginTimeout,

I checked the JDBC driver, and it seems OK.


Properties (screenshot)

What else could cause this problem?

Could it be a VPC issue?
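For what it's worth, the `Required Connection Key(s): UID` part of the message suggests the driver never received a username. A minimal sketch, assuming the Amazon Redshift JDBC driver and using the cluster endpoint from the error as a placeholder, of passing `UID`/`PWD` explicitly:

```java
import java.util.Properties;

public class RedshiftProps {
    // Build the connection properties the driver complains about.
    // The UID/PWD key names are taken from the error message itself;
    // the user and password values here are placeholders.
    static Properties connectionProperties(String user, String password) {
        Properties props = new Properties();
        props.setProperty("UID", user);     // the required key from the error
        props.setProperty("PWD", password);
        return props;
    }

    public static void main(String[] args) {
        String url = "jdbc:redshift://redshift-cluster-1.cfioxtdojc5x.eu-central-1"
                + ".redshift.amazonaws.com:5439/dev";
        Properties props = connectionProperties("awsuser", "secret");
        // java.sql.DriverManager.getConnection(url, props) would attempt the
        // real connection once credentials and network access are in place.
        System.out.println(url + " UID=" + props.getProperty("UID"));
    }
}
```

In DataGrip the same thing is achieved by filling in the User field of the data source, or adding `UID` under the Advanced/Properties tab.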

Upvotes: 0

Views: 3603

Answers (1)

Golokesh Patra

Reputation: 596

I suspect you might be using the wrong Redshift JDBC driver or configuring DataGrip incorrectly. Please follow this documentation (hope it helps): Redshift-Datagrip-Forum

I am assuming you are using the latest 2019 version of DataGrip.

I am using Redshift JDBC driver version 1.2.1.1001.

Step 1 -> Configure the data source with the driver

Step 2 -> (screenshot)

The error you got usually happens because of:

  1. Driver-related issues
  2. Network access (VPCs, subnets, security groups, etc.)
  3. If you are tunneling through an EC2 instance and then connecting to Redshift, you should be careful about where that VM sits (its VPC, i.e. point 2). You can run a telnet from the EC2 instance to check whether it can reach Redshift.
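The telnet check in point 3 can also be done from code. A minimal sketch using a plain TCP socket with a timeout (the hostname below is the one from the question, and 5439 is Redshift's default port):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    // Roughly equivalent to `telnet host port`: returns true only if a TCP
    // connection can be opened within timeoutMs milliseconds.
    static boolean isReachable(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            // DNS failure, refused connection, or timeout all land here.
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isReachable(
                "redshift-cluster-1.cfioxtdojc5x.eu-central-1.redshift.amazonaws.com",
                5439, 3000));
    }
}
```

If this returns false from the machine (or EC2 instance) you are connecting from, the problem is network access (security groups/VPC), not the driver or credentials.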

Yesterday I came across a similar issue, though not related to DataGrip: I was running a Spark job on EMR when I hit a similar error.

ERROR -

19/08/28 08:54:17 ERROR RedshiftCommunicator: Error occured while Creating Redshift Tables. **Reason [Amazon][JDBC](10100) Connection Refused: [Amazon][JDBC](11640) Required Connection Key(s)**: PWD; [Amazon][JDBC](11480) Optional Connection Key(s): AccessKeyID, AuthMech, BlockingRowsMode, ClusterID, DbGroups, DisableIsValidQuery, DriverLogLevel, EndpointUrl, FilterLevel, IAMDuration, Language, loginTimeout, OpenSourceSubProtocolOverride, plugin_name, profile, Region, SecretAccessKey, SessionToken, socketTimeout, ssl, sslcert, sslfactory, sslkey, sslpassword, sslrootcert, SSLTruststore , SSLTrustStorePath, tcpKeepAlive, TCPKeepAliveMinutes, unknownLength
java.sql.SQLNonTransientConnectionException: [Amazon][JDBC](10100) Connection Refused: [Amazon][JDBC](11640) Required Connection Key(s): PWD; [Amazon][JDBC](11480) Optional Connection Key(s): AccessKeyID, AuthMech, BlockingRowsMode, ClusterID, DbGroups, DisableIsValidQuery, DriverLogLevel, EndpointUrl, FilterLevel, IAMDuration, Language, loginTimeout, OpenSourceSubProtocolOverride, plugin_name, profile, Region, SecretAccessKey, SessionToken, socketTimeout, ssl, sslcert, sslfactory, sslkey, sslpassword, sslrootcert, SSLTruststore , SSLTrustStorePath, tcpKeepAlive, TCPKeepAliveMinutes, unknownLength
    at com.amazon.exceptions.ExceptionConverter.toSQLException(Unknown Source)
    at com.amazon.jdbc.common.BaseConnectionFactory.checkResponseMap(Unknown Source)
    at com.amazon.jdbc.common.BaseConnectionFactory.doConnect(Unknown Source)
    at com.amazon.jdbc.common.AbstractDriver.connect(Unknown Source)
    at com.amazon.redshift.jdbc.Driver.connect(Unknown Source)
    at java.sql.DriverManager.getConnection(DriverManager.java:664)
    at java.sql.DriverManager.getConnection(DriverManager.java:208)
    at com.hp.ta.utils.RedshiftCommunicator.recreateReportTables(RedshiftCommunicator.java:237)
    at com.hp.ta.apps.spark.CpuMemSilverJob.run(CpuMemSilverJob.java:63)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.hp.ta.controllers.AppRunner.main(AppRunner.java:55)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:743)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Command exiting with ret '1'

The above issue was solved by:

  1. Properly configuring the IAM role attached to the EMR cluster with the Redshift get_credentials policy.
  2. Ensuring the same IAM role is also read by my Spark code via the Spark env configs.
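If IAM-based authentication is the route you end up taking, the Amazon Redshift driver also supports an IAM URL scheme, in which case temporary credentials are fetched via the IAM role instead of UID/PWD. A sketch, with the cluster endpoint and database as placeholders:

```java
public class IamUrl {
    // Build a jdbc:redshift:iam:// URL, which tells the Amazon driver to
    // obtain temporary credentials through IAM rather than UID/PWD.
    static String iamUrl(String host, int port, String database) {
        return "jdbc:redshift:iam://" + host + ":" + port + "/" + database;
    }

    public static void main(String[] args) {
        // Hypothetical endpoint, matching the one in the question.
        System.out.println(iamUrl(
                "redshift-cluster-1.cfioxtdojc5x.eu-central-1.redshift.amazonaws.com",
                5439, "dev"));
    }
}
```

The optional keys listed in the error (`plugin_name`, `profile`, `AccessKeyID`, `SecretAccessKey`, etc.) are how the credential source for this mode gets configured.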

I know the above is a bit unrelated, but it might help you debug the issue.

Another request:

Can you please update your description with a few more details, such as:

  1. Are you using SSH tunneling?
  2. Security groups (is your IP whitelisted for Redshift access? Which I guess it is, because the error is about credentials)
  3. Have you tried connecting to Redshift via SQL WorkbenchJ?

One piece of advice from experience: you should also keep your Redshift cluster encrypted (at least at the server level, using a service like KMS).

Upvotes: 2
