Reputation: 2200
I need assistance as I'm having trouble connecting to a managed Postgres database on RDS. I'm encountering the following error message: "no pg_hba.conf entry for host '16.151.149.51', user 'analytics', database 'database', no encryption" I have confirmed that encryption is functioning properly, and I've also added the IP to the security groups. What steps should I take to resolve this issue?
Upvotes: 59
Views: 80758
Reputation: 427
Another possible cause/solution: if you told RDS to use AWS Secrets Manager to store the creds (in my case, Aurora PG Serverless v2), by default the password will be rotated every seven days and, most importantly, will be rotated very soon after the DB is created.
A couple of points here:
The delay before the initial rotation is long enough for you to pick up the initial password from Secrets Manager and then try to use it. Often, by the time you try to use it, it has already been rotated; if you check Secrets Manager again, you'll see the new password, which should eventually work.
When the password is auto-rotated, it takes a while for RDS to pick up the new password from Secrets Manager. During the window between when the password is rotated and when RDS recognizes the new password, you'll get this misleading error about pg_hba.conf.
The simple solution is to either wait a bit, or restart the cluster so RDS immediately picks up the new password from Secrets Manager.
Just guessing, but I suspect a lot of the "disable SSL" solutions simply benefit from the cluster restart they require; disabling SSL isn't actually necessary (it wasn't in my case).
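If you want to double-check what Secrets Manager currently holds before retrying, here is a minimal sketch using the AWS SDK for JavaScript v3 (the region and secret ARN are placeholders):

import {
  SecretsManagerClient,
  GetSecretValueCommand,
} from "@aws-sdk/client-secrets-manager";

const client = new SecretsManagerClient({ region: "us-east-1" }); // placeholder region

// RDS-managed master user secrets store a JSON document with username/password fields.
async function currentDbCredentials(secretArn: string) {
  const { SecretString } = await client.send(
    new GetSecretValueCommand({ SecretId: secretArn })
  );
  return JSON.parse(SecretString ?? "{}") as { username: string; password: string };
}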
Upvotes: 0
Reputation: 2521
First of all, I want to note that Nick's answer resolved my issue, but I'd like to add detailed steps to follow for those who are new to AWS:
1. In the RDS console, go to Parameter groups and create a new parameter group for the postgres15 family (the default.postgres15 group isn't editable).
2. Open the new parameter group and change rds.force_ssl from 1 to 0.
3. In your database's Configuration tab, switch the 'DB instance parameter group' from the default group to the new one.
4. Reboot the database and try connecting again.
By following these steps, you should be able to successfully modify the rds.force_ssl parameter on your Amazon RDS instance, and hopefully the connection issue will be resolved.
Note: this method removes the default SSL requirement on the connection and shouldn't be used for production databases.
Upvotes: 116
Reputation: 31
I was getting the same issue while pulling data from RDS using Glue 2.0, but it works fine in Glue 3 and 4.
So below is a workaround for Glue 2.
First, download the PostgreSQL JDBC driver jar from https://jdbc.postgresql.org/download/ and upload it to S3.
After adding the parameters below, it works fine:
--extra-jars -> path/to/s3/postgresql-42.6.2.jar
--user-jars-first -> true
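For anyone defining the job as infrastructure rather than through the console, here is an illustrative CDK (TypeScript) sketch wiring those same two parameters into a Glue 2.0 job; the role ARN, bucket and script paths are placeholders:

import { aws_glue as glue } from "aws-cdk-lib";
import { Construct } from "constructs";

export function addLegacyGlueJob(scope: Construct) {
  new glue.CfnJob(scope, "rds-extract-job", {
    name: "rds-extract-job",
    role: "arn:aws:iam::123456789012:role/GlueJobRole", // placeholder role ARN
    glueVersion: "2.0",
    command: {
      name: "glueetl",
      pythonVersion: "3",
      scriptLocation: "s3://my-bucket/scripts/extract.py", // placeholder script location
    },
    defaultArguments: {
      // Load the newer PostgreSQL JDBC driver ahead of the bundled one.
      "--extra-jars": "s3://my-bucket/jars/postgresql-42.6.2.jar",
      "--user-jars-first": "true",
    },
  });
}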
Upvotes: 0
Reputation: 364
You need to add the cert chain to your cert store manually.
RDS uses a self-signed certificate that isn't going to be verified by a public CA. You can download the certificate chain for your region from here.
Place your .pem certificate chain in /usr/share/ca-certificates
Edit /etc/ca-certificates.conf and add your certificate name there.
(Look at the update-ca-certificates man page for more information.)
Then run sudo update-ca-certificates
Try to connect to your instance
If you're using an application like Authelia, you may instead need to provide the certificate directly to your application. In the case of Authelia, that would be in the /config/certificates directory. This was my personal case.
In case the link to the certificate chains ever dies, try finding an AWS article with the text "Download certificate bundles for Amazon RDS".
Upvotes: 0
Reputation: 493
I used the "Set up EC2 connection" from Connected compute resources section inside RDS settings, and it started working
Upvotes: 0
Reputation: 447
If you're using Engine 15 or higher:
When setting up a database in RDS, the default parameter group for postgres15 (default.postgres15) is used. However, we need to change the 'rds.force_ssl' parameter, which isn't editable in the default.postgres15 group. To do this, we'll create a new parameter group for postgres15, which allows us to make edits.
Once the new parameter group is created, we'll select it and find the 'rds.force_ssl' parameter. We'll change its value from 1 to 0 (the default is 1).
Then, in the database configuration tab, we'll switch the 'DB instance parameter group' from the default group to the new one.
After making these changes, we'll reboot the database and try connecting again. This should work.
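If you prefer to script those steps instead of clicking through the console, here is a minimal sketch using the AWS SDK for JavaScript v3 RDS client; the parameter group name, instance identifier and region are placeholders, and in practice you may need to wait for the instance to finish modifying before the reboot:

import {
  RDSClient,
  CreateDBParameterGroupCommand,
  ModifyDBParameterGroupCommand,
  ModifyDBInstanceCommand,
  RebootDBInstanceCommand,
} from "@aws-sdk/client-rds";

const rds = new RDSClient({ region: "us-east-1" }); // placeholder region

async function disableForceSsl(instanceId: string) {
  // 1. Create an editable copy of the default postgres15 parameter group.
  await rds.send(
    new CreateDBParameterGroupCommand({
      DBParameterGroupName: "postgres15-no-force-ssl",
      DBParameterGroupFamily: "postgres15",
      Description: "postgres15 with rds.force_ssl disabled",
    })
  );

  // 2. Change rds.force_ssl from 1 to 0 in the new group.
  await rds.send(
    new ModifyDBParameterGroupCommand({
      DBParameterGroupName: "postgres15-no-force-ssl",
      Parameters: [
        {
          ParameterName: "rds.force_ssl",
          ParameterValue: "0",
          ApplyMethod: "pending-reboot",
        },
      ],
    })
  );

  // 3. Attach the new parameter group to the instance...
  await rds.send(
    new ModifyDBInstanceCommand({
      DBInstanceIdentifier: instanceId,
      DBParameterGroupName: "postgres15-no-force-ssl",
      ApplyImmediately: true,
    })
  );

  // 4. ...and reboot so the pending change takes effect.
  await rds.send(
    new RebootDBInstanceCommand({ DBInstanceIdentifier: instanceId })
  );
}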
Upvotes: 43
Reputation: 482
The approaches suggested by @theiskaa and @Nikhil P K might get you a successful connection to the RDS database, but they potentially bypass the use of SSL, which is highly inadvisable in production environments.
To connect securely to your RDS database, follow these steps:
Modify your database connection config to include SSL:
const fs = require('fs');

const dbConfig = {
  user: 'user',
  host: 'host',
  database: 'name',
  password: 'password',
  port: port,
  ssl: {
    require: true,
    // Verify the server certificate against the downloaded RDS CA bundle.
    rejectUnauthorized: true,
    ca: fs.readFileSync('/pathto/rds-ca-cert.pem').toString(),
  },
};
Download the CA certificate bundle that matches your RDS instance's region from the AWS RDS documentation (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html#UsingWithRDS.SSL.RegionCertificates). You can check which one you need in your AWS RDS console, in the “Connectivity and settings” section, under Certificate authority.
After downloading the CA certificate bundle, place it in your project directory.
Make sure that the path in the fs.readFileSync points to where you have the CA certificate within your project directory.
Now you should be able to connect securely by verifying the server certificate against the downloaded AWS RDS CA certificate.
Note: I used Node.js in this example. If you are using a different environment or language, you need to adjust the syntax and method of reading the CA certificate file accordingly.
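To round this out, here is a minimal usage sketch with node-postgres, assuming the dbConfig object defined above (the query is just a connectivity check):

const { Client } = require('pg');

async function main() {
  const client = new Client(dbConfig);
  await client.connect(); // TLS handshake is verified against the RDS CA bundle
  const res = await client.query('SELECT version()');
  console.log(res.rows[0]);
  await client.end();
}

main().catch(console.error);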
Upvotes: 33
Reputation: 1
I'm new to AWS, too. I ran into this problem when I used Beekeeper to test the connection to a new Postgres DB. AWS automatically provides a CA, so I didn't need to download the certificate. I did have to toggle the "Enable SSL" button when creating a new connection in Beekeeper, and that worked for me.
Upvotes: 0
Reputation: 2080
In my use case, I saw this exact same error while testing connectivity with a Postgres endpoint that didn't use an encrypted connection.
Resolution: I modified the endpoint to require SSL. Just that, and it connected successfully.
Upvotes: 0
Reputation: 839
An alternative to the top-voted answer is to change the client instead of the server (RDS) instance.
One way to do it, if you don't want to set up certificates, is to set sslmode to require, allow, or prefer. See https://jdbc.postgresql.org/documentation/use/#connection-parameters and https://jdbc.postgresql.org/documentation/ssl/#configuring-the-client.
So your JDBC URL would be: jdbc:postgresql://<host>/<db_name>?sslmode=prefer
Upvotes: 2
Reputation: 148
Just want to share my experience with this problem over the last couple of days.
After trying all the suggested solutions, the problem turned out to be a wrong "Master username".
I copied the correct Master username from the Configuration tab in the AWS RDS console, and it worked. :)
Upvotes: 4
Reputation: 959
Piggybacking off of Nikhil P K's answer, this CDK code will turn off forced SSL (rds.force_ssl) for Postgres 15+:
import {
  DatabaseInstance,
  DatabaseInstanceEngine,
  ParameterGroup,
  PostgresEngineVersion,
} from "aws-cdk-lib/aws-rds";

const engine = DatabaseInstanceEngine.postgres({
  version: PostgresEngineVersion.VER_15,
});

// A custom parameter group is needed because the default one can't be edited.
const parameterGroup = new ParameterGroup(this, "parameter-group", {
  engine,
  parameters: {
    "rds.force_ssl": "0",
  },
});

this.database = new DatabaseInstance(this, "database", {
  engine,
  parameterGroup,
  // ...the rest of the setup
});
Upvotes: 3
Reputation: 493
For node-postgres:

const { Pool } = require('pg');

const pool = new Pool({
  user: "",
  password: "",
  host: "",
  database: "",
  port: "",
  ssl: {
    // Skip certificate verification (connects over SSL without validating the RDS cert).
    rejectUnauthorized: false
  }
});
Upvotes: 22
Reputation: 2559
Okay, OP might have resolved the issue already, but for others who come across this: ensure you have the AWS RDS CA certificates downloaded and double-check that your credentials are correct. In my case the Terraform module I used had manage_master_user_password set to true by default, which silently discarded my provided password and set one on its own.
But basically, for the unaware, the steps to connect using SSL are:
1. Make sure the DB is publicly accessible (publicly_accessible = true in TF) or open to at least your VPC subnets with the proper ingress rules, e.g.:

ingress {
  from_port   = 5432
  to_port     = 5432
  protocol    = "tcp"
  description = "PostgreSQL access for VPC and local machine"
  cidr_blocks = [
    module.vpc.vpc_cidr_block,
    "1.1.1.1/32"
  ]
}

2. Download the RDS CA bundle:

curl -O https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem

3. Create a .postgresql folder wherever Postgres expects it (the connecting user's home directory) and move the cert there as root.crt, e.g.:

mv global-bundle.pem /root/.postgresql/root.crt

4. Connect with sslmode=verify-full, e.g.:

psql postgresql://<your user>:<your pass>@postgres-1-project.z4dadjt5qxpn.us-east-1.rds.amazonaws.com:5432/<your db>?sslmode=verify-full

If you get FATAL: password authentication failed for user, check your credentials again. SSL should work (or at least complain about something) if you have it set to verify-full.
Upvotes: 3