chazzmoney

Reputation: 231

AWS Glue Crawler adding tables for every partition?

I have several thousand files in an S3 bucket in this form:

├── bucket
│   ├── somedata
│   │   ├── year=2016
│   │   ├── year=2017
│   │   │   ├── month=11
│   │   |   │   ├── sometype-2017-11-01.parquet
│   |   |   |   ├── sometype-2017-11-02.parquet
│   |   |   |   ├── ...
│   │   │   ├── month=12
│   │   |   │   ├── sometype-2017-12-01.parquet
│   |   |   |   ├── sometype-2017-12-02.parquet
│   |   |   |   ├── ...
│   │   ├── year=2018
│   │   │   ├── month=01
│   │   |   │   ├── sometype-2018-01-01.parquet
│   |   |   |   ├── sometype-2018-01-02.parquet
│   |   |   |   ├── ...
│   ├── moredata
│   │   ├── year=2017
│   │   │   ├── month=11
│   │   |   │   ├── moretype-2017-11-01.parquet
│   |   |   |   ├── moretype-2017-11-02.parquet
│   |   |   |   ├── ...
│   │   ├── year=...

etc

Expected behavior: The AWS Glue Crawler creates one table for each of somedata, moredata, etc. It creates partitions for each table based on the children's path names.

Actual Behavior: The AWS Glue Crawler performs the behavior above, but ALSO creates a separate table for every partition of the data, resulting in several hundred extraneous tables (and more extraneous tables with every data add + new crawl).

I don't see anywhere to set an option or otherwise prevent this from happening... Does anyone have advice on the best way to keep these unnecessary tables from being created?

Upvotes: 15

Views: 11136

Answers (6)

peralmq

Reputation: 2348

Use the "Create a Single Schema for Each Amazon S3 Include Path" option to keep the AWS Glue Crawler from adding all these extra tables.
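For reference, that console option corresponds to the crawler's configuration JSON (TableGroupingPolicy set to CombineCompatibleSchemas), so you can also set it from the CLI. A minimal sketch, assuming a crawler named my-crawler (a placeholder):

# "my-crawler" is a placeholder; this sets the console's "single schema per
# include path" option via the crawler configuration JSON.
aws glue update-crawler \
  --name my-crawler \
  --configuration '{"Version":1.0,"Grouping":{"TableGroupingPolicy":"CombineCompatibleSchemas"}}'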

I had this problem and ended up with ~7k tables 😅 so I wrote the following script to remove them. It requires jq.

#!/bin/sh
# Dump the names of the offending tables (jq -r emits them without quotes),
# then delete them in chunks of 50 via batch-delete-table.
aws glue get-tables --region <YOUR AWS REGION> --database-name <YOUR AWS GLUE DATABASE> | jq -r '.TableList[] | .Name' | grep <A PATTERN THAT MATCHES YOUR TABLENAMEs> > /tmp/table-names.json
cd /tmp
mkdir table-names
cd table-names
split -l 50 ../table-names.json
for f in *; do
  tr '\r\n' ' ' < "$f" | xargs aws glue batch-delete-table --region <YOUR AWS REGION> --database-name <YOUR AWS GLUE DATABASE> --tables-to-delete
done

Upvotes: 6

Thiago Baldim

Reputation: 7742

So, my case was a little bit different, but I was seeing the same behaviour.

I got a data structure like this:

├── bucket
│   ├── somedata
│   │   ├── event_date=2016-01-01
│   │   ├── event_date=2016-01-02

So when I ran the AWS Glue Crawler, instead of updating the tables, the pipeline was creating one table per date. After digging into the problem, I found that someone had introduced a column-name bug in the JSON file: ID instead of id. Because my data is Parquet, the pipeline worked fine for storing and retrieving the data inside EMR. But Glue was crashing pretty badly, because Glue converts everything to lowercase, and that was probably why it was crashing. After removing the uppercase column, Glue started to work like a charm.
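If you want to check your own files for the same issue, something like this can surface mixed-case column names. A sketch, assuming parquet-tools is installed and its usual "repetition type name" schema output; the file name is just the question's example:

# Any output line indicates an uppercase letter in a column name.
# Assumes parquet-tools is available; the file name is a placeholder.
parquet-tools schema sometype-2017-11-01.parquet \
  | awk '$1 ~ /^(optional|required|repeated)$/ { print $3 }' \
  | grep '[A-Z]'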

Upvotes: 2

Bob McCormick

Reputation: 225

You need to have separate crawlers for each table/file type. So create one crawler that looks at s3://bucket/somedata/ and a second crawler that looks at s3://bucket/moredata/.
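For example, via the CLI (a sketch; the crawler names, role, and database below are placeholders, not anything from the question):

# All names here are placeholders; substitute your own role and database.
aws glue create-crawler --name somedata-crawler --role MyGlueServiceRole \
  --database-name mydb \
  --targets '{"S3Targets":[{"Path":"s3://bucket/somedata/"}]}'
aws glue create-crawler --name moredata-crawler --role MyGlueServiceRole \
  --database-name mydb \
  --targets '{"S3Targets":[{"Path":"s3://bucket/moredata/"}]}'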

Upvotes: -6

cozyss

Reputation: 1398

Check whether you have empty folders inside. When Spark writes to S3, sometimes the _temporary folder is not deleted, which will make the Glue crawler create a table for each partition.
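A quick way to check for (and clean up) those leftovers, sketched with the question's bucket layout as a placeholder:

# List any _temporary leftovers from Spark writes under the prefix...
aws s3 ls s3://bucket/somedata/ --recursive | grep _temporary
# ...then, once you've confirmed the matches are safe to delete, remove them.
aws s3 rm s3://bucket/somedata/ --recursive --exclude "*" --include "*_temporary*"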

Upvotes: 3

jaketbouma

Reputation: 141

Adding to the excludes

  • **_SUCCESS
  • **crc

worked for me (see the AWS docs page for glue/add-crawler). Double stars match files at all folder (i.e. partition) depths. I had an _SUCCESS file living a few levels up.
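The same patterns can also go into the crawler's S3 target exclusions from the CLI. A sketch, with the crawler name and path as placeholders:

# "somedata-crawler" and the path are placeholders; Exclusions carries the
# two patterns above.
aws glue update-crawler --name somedata-crawler \
  --targets '{"S3Targets":[{"Path":"s3://bucket/somedata/","Exclusions":["**_SUCCESS","**crc"]}]}'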

Make sure you set up logging for Glue, which quickly points out permission errors etc.

Upvotes: 7

Ricardo

Reputation: 195

I was having the same problem. I added *crc* as an exclude pattern to the AWS Glue crawler and it worked. Or, if you crawl entire directories, add */*crc*.

Upvotes: 2
