Reputation: 629
I have a crawler I created in AWS Glue that does not create a table in the Data Catalog after it successfully completes.
The crawler takes roughly 20 seconds to run and the logs show it successfully completed. CloudWatch log shows:
I am at a loss as to why the tables in the Data Catalog are not being created. The AWS docs are not much help for debugging.
Upvotes: 49
Views: 41200
Reputation: 6940
FWIW, I was trying to use a JDBC connection to an RDS instance as the source of my crawl. I was putting what I thought was a direct path to the source table (e.g. postgres/table_name). However, I forgot that the table was nested in the public schema. Setting my source value to postgres/% fixed the issue for me.
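To see why the schema component matters, here is a toy matcher (not Glue's actual implementation) that treats the include path's % as a wildcard, the way a shell glob would. For PostgreSQL sources, Glue addresses tables as database/schema/table, so a two-component path can never match a real table:

```python
from fnmatch import fnmatchcase

# Toy illustration only: translate Glue's '%' wildcard into the shell-style
# '*' so fnmatch can stand in for the include-path comparison.
def include_path_matches(include_path: str, table_path: str) -> bool:
    return fnmatchcase(table_path, include_path.replace("%", "*"))

# PostgreSQL tables live at database/schema/table, so a path that omits the
# schema component never matches:
print(include_path_matches("postgres/table_name", "postgres/public/table_name"))  # False
print(include_path_matches("postgres/%", "postgres/public/table_name"))           # True
```

The include path postgres/public/% would be the narrower fix if you only want tables in the public schema.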
Upvotes: 0
Reputation: 1
Encountered the same problem. I created a new crawler and a new IAM role but still used the same database and it worked!
Upvotes: 0
Reputation: 9508
In my case, the problem was the setting Crawler source type > Repeat crawls of S3 data stores, which I'd set to Crawl new folders only, because I thought it would crawl everything on the first run and then continue to discover only new data.
After setting it to Crawl all folders, it discovered all tables.
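If you manage the crawler through the API rather than the console, this setting is the RecrawlPolicy field of the UpdateCrawler call. A sketch of the arguments (the crawler name here is made up):

```python
# The console's "Repeat crawls of S3 data stores" maps to RecrawlPolicy:
# CRAWL_EVERYTHING corresponds to "Crawl all folders", and
# CRAWL_NEW_FOLDERS_ONLY corresponds to "Crawl new folders only".
update_kwargs = {
    "Name": "my-crawler",  # hypothetical crawler name
    "RecrawlPolicy": {
        "RecrawlBehavior": "CRAWL_EVERYTHING",
    },
}
```

You would pass this to a boto3 Glue client as boto3.client("glue").update_crawler(**update_kwargs) and then re-run the crawler.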
Upvotes: 1
Reputation: 1727
I had a similar IAM issue as mentioned by Ray. But in my case, I did not add an asterisk (*) after the bucket name, which meant the crawler did not go into the subfolders, and no table was created.
Wrong:
{
    "Statement": [
        {
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::bucket-name"
            ]
        }
    ],
    "Version": "2012-10-17"
}
Correct:
{
    "Statement": [
        {
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::bucket-name*"
            ]
        }
    ],
    "Version": "2012-10-17"
}
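A quick way to see the difference: s3:GetObject is authorized against object ARNs of the form arn:aws:s3:::bucket-name/key, and IAM's * behaves much like a shell glob here. The snippet below is a rough illustration using fnmatch, not a real IAM evaluator:

```python
from fnmatch import fnmatchcase

# An object ARN always has a '/key' suffix after the bucket name, so a
# Resource entry without a trailing wildcard never matches any object.
object_arn = "arn:aws:s3:::bucket-name/some/prefix/file.csv"

print(fnmatchcase(object_arn, "arn:aws:s3:::bucket-name"))   # False: bucket ARN only
print(fnmatchcase(object_arn, "arn:aws:s3:::bucket-name*"))  # True: covers objects too
```

The more common convention is to list two resources explicitly, arn:aws:s3:::bucket-name for bucket-level actions and arn:aws:s3:::bucket-name/* for object-level ones, but the trailing * shown above also covers both.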
Upvotes: 2
Reputation: 1179
I had the same issue. As advised by others, I tried to revise the existing IAM role to include the new S3 bucket as the resource, but for some reason it did not work. Then I created a completely new role from scratch... this time it worked. Also, one big question I have for AWS: why does this access-denied error, caused by a wrong attached IAM policy, not show up in the CloudWatch log?? That makes it difficult to debug.
Upvotes: 6
Reputation: 1457
Here is my sample role JSON that allows Glue to access S3 and create a table.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ec2:DeleteTags",
                "ec2:CreateTags"
            ],
            "Resource": [
                "arn:aws:ec2:*:*:instance/*",
                "arn:aws:ec2:*:*:security-group/*",
                "arn:aws:ec2:*:*:network-interface/*"
            ],
            "Condition": {
                "ForAllValues:StringEquals": {
                    "aws:TagKeys": "aws-glue-service-resource"
                }
            }
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "iam:GetRole",
                "cloudwatch:PutMetricData",
                "ec2:DeleteNetworkInterface",
                "s3:ListBucket",
                "s3:GetBucketAcl",
                "logs:PutLogEvents",
                "ec2:DescribeVpcAttribute",
                "glue:*",
                "ec2:DescribeSecurityGroups",
                "ec2:CreateNetworkInterface",
                "s3:GetObject",
                "s3:PutObject",
                "logs:CreateLogStream",
                "s3:ListAllMyBuckets",
                "ec2:DescribeNetworkInterfaces",
                "logs:AssociateKmsKey",
                "ec2:DescribeVpcEndpoints",
                "iam:ListRolePolicies",
                "s3:DeleteObject",
                "ec2:DescribeSubnets",
                "iam:GetRolePolicy",
                "s3:GetBucketLocation",
                "ec2:DescribeRouteTables"
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": "s3:CreateBucket",
            "Resource": "arn:aws:s3:::aws-glue-*"
        },
        {
            "Sid": "VisualEditor3",
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "*"
        }
    ]
}
Upvotes: 0
Reputation: 1398
You can try excluding some files in the S3 bucket; the excluded files should then appear in the log. I find this helpful in debugging what the crawler is actually doing.
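Exclude patterns live on the crawler's S3 target as the Exclusions list (glob-style patterns). A sketch of such a target as you would pass it in the Targets argument of CreateCrawler or UpdateCrawler (bucket name and patterns are made up):

```python
# One S3 target within a crawler's Targets["S3Targets"] list. Exclusions
# takes glob-style patterns; excluded files are reported in the crawl log.
s3_target = {
    "Path": "s3://my-bucket/data/",                  # hypothetical bucket
    "Exclusions": ["**/_temporary/**", "**.json"],   # hypothetical patterns
}
```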
Upvotes: 1
Reputation: 161
If you have existing tables in the target database, the crawler may associate your new files with an existing table rather than create a new one.
This occurs when there are similarities in the data, or a folder structure that Glue may interpret as partitioning.
Also, on occasion I have needed to refresh the table listing of a database to get new tables to show up.
Upvotes: 2
Reputation: 611
Check the IAM role associated with the crawler. Most likely you don't have the correct permissions.
When you create the crawler, if you choose to create an IAM role (the default setting), it will create a policy for the S3 object you specified only. If you later edit the crawler and change only the S3 path, the role associated with the crawler won't have permission to the new S3 path.
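One way to catch this is to check the role's policy against the new path yourself. The function below is a rough sanity check, not a real IAM evaluator (it ignores Deny statements, Conditions, and NotAction/NotResource), and the bucket names are made up:

```python
import json
from fnmatch import fnmatchcase

# Rough sanity check: does this identity policy contain an Allow statement
# matching both the action and the resource ARN?
def policy_allows(policy: dict, action: str, arn: str) -> bool:
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if any(fnmatchcase(action, a) for a in actions) and \
           any(fnmatchcase(arn, r) for r in resources):
            return True
    return False

# A policy auto-generated for the crawler's original path only:
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": ["arn:aws:s3:::original-bucket/data/*"],  # hypothetical
    }],
}

print(policy_allows(policy, "s3:GetObject", "arn:aws:s3:::original-bucket/data/x.csv"))  # True
print(policy_allows(policy, "s3:GetObject", "arn:aws:s3:::new-bucket/data/x.csv"))       # False
```

When the second check comes back False, either widen the role's Resource list to cover the new bucket or recreate the role from the crawler wizard.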
Upvotes: 61