Reputation: 320
Does anyone have a quick way of getting the line count of a file hosted in S3? Preferably using the CLI or s3api, but I am open to python/boto as well. Note: the solution must run non-interactively, i.e. in an overnight batch.
Right now I am doing this; it works, but it takes around 10 minutes for a 20 GB file:
aws s3 cp s3://foo/bar - | wc -l
Upvotes: 6
Views: 33145
Reputation: 269081
UPDATE 2024: Amazon S3 Select is no longer available to new users.
Here are two methods that might work for you...
Amazon S3 has a new feature called S3 Select that allows you to query files stored on S3.
You can perform a count of the number of records (lines) in a file and it can even work on GZIP files. Results may vary depending upon your file format.
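For reference, a minimal boto3 sketch of an S3 Select count on a gzipped CSV might look like this (the bucket and key are placeholders, and it assumes S3 Select is still enabled on your account):
import boto3

s3 = boto3.client('s3')

# COUNT(*) is computed server-side, so only the single result row is returned.
resp = s3.select_object_content(
    Bucket='foo',
    Key='bar.csv.gz',
    ExpressionType='SQL',
    Expression="SELECT COUNT(*) FROM s3object",
    InputSerialization={'CSV': {'FileHeaderInfo': 'NONE'}, 'CompressionType': 'GZIP'},
    OutputSerialization={'CSV': {}},
)

for event in resp['Payload']:
    if 'Records' in event:
        print(event['Records']['Payload'].decode('utf-8').strip())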
Amazon Athena is a similar option that might be suitable; it can also query files stored in Amazon S3.
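If you go the Athena route instead, a minimal boto3 sketch could look like the following; it assumes you have already defined an external table (here called my_db.my_table) over the S3 location and that s3://my-athena-results/ is a valid query-results bucket:
import time
import boto3

athena = boto3.client('athena')

# Athena scans the files in S3 directly; the query runs asynchronously.
qid = athena.start_query_execution(
    QueryString='SELECT COUNT(*) FROM my_db.my_table',
    ResultConfiguration={'OutputLocation': 's3://my-athena-results/'},
)['QueryExecutionId']

# Poll until the query finishes; fine for a non-interactive batch job.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)['QueryExecution']['Status']['State']
    if state in ('SUCCEEDED', 'FAILED', 'CANCELLED'):
        break
    time.sleep(2)

if state == 'SUCCEEDED':
    rows = athena.get_query_results(QueryExecutionId=qid)['ResultSet']['Rows']
    print(rows[1]['Data'][0]['VarCharValue'])  # row 0 is the header row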
Upvotes: 13
Reputation: 81
The following AWS CLI S3 Select query helped me get the number of lines in a CSV file stored in S3:
aws s3api select-object-content \
--bucket <s3-bucket-name> \
--key <s3-key> \
--expression "SELECT COUNT(*) FROM s3object" \
--expression-type 'SQL' \
--input-serialization '{"CSV": {"FileHeaderInfo": "Use"}}' \
--output-serialization '{"CSV": {"FieldDelimiter": ":"}}' /dev/stdout
Output:
53261928
Upvotes: 1
Reputation: 11
This post helped me get faster results for all the parquet files in a directory: https://donghao.org/2021/12/17/get-the-number-of-rows-for-a-parquet-file/
import pyarrow.parquet as pq

# columns=[] avoids reading any column data; only the row count is needed
table = pq.read_table("my.parquet", columns=[])
print(table.num_rows)
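For an object that lives in S3 rather than on local disk, the same trick might look like this with a reasonably recent pyarrow, which resolves s3:// URIs using its built-in S3 filesystem (the bucket and key below are placeholders):
import pyarrow.parquet as pq

# columns=[] skips all column data, so only the row count is materialized.
table = pq.read_table("s3://foo/bar.parquet", columns=[])
print(table.num_rows)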
Upvotes: 1
Reputation: 2375
You can do it using python/boto3. Import the modules and define the bucket name and object key:
import os
import boto3

colsep = ','
s3 = boto3.client('s3')
bucket_name = 'my-data-test'
s3_key = 'in/file.parquet'
Note that S3 SELECT can access only one file at a time.
Now you can open the S3 SELECT cursor:
sql_stmt = """SELECT count(*) FROM s3object S"""
req_fact = s3.select_object_content(
    Bucket=bucket_name,
    Key=s3_key,
    ExpressionType='SQL',
    Expression=sql_stmt,
    InputSerialization={'Parquet': {}},
    OutputSerialization={'CSV': {
        'RecordDelimiter': os.linesep,
        'FieldDelimiter': colsep}},
)
Now iterate through the returned records:
for event in req_fact['Payload']:
    if 'Records' in event:
        rr = event['Records']['Payload'].decode('utf-8')
        for i, rec in enumerate(rr.split(os.linesep)):
            if rec:
                row = rec.split(colsep)
                if row:
                    print('File line count:', row[0])
If you want to count records in all parquet files in a given S3 directory, check out this python/boto3 script: S3-parquet-files-row-counter
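For reference, a minimal sketch of that per-prefix idea might look like this (the bucket name and prefix are placeholders, and each object still needs its own S3 SELECT call):
import boto3

s3 = boto3.client('s3')
bucket_name = 'my-data-test'
prefix = 'in/'

total = 0
paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket=bucket_name, Prefix=prefix):
    for obj in page.get('Contents', []):
        if not obj['Key'].endswith('.parquet'):
            continue
        resp = s3.select_object_content(
            Bucket=bucket_name,
            Key=obj['Key'],
            ExpressionType='SQL',
            Expression='SELECT count(*) FROM s3object S',
            InputSerialization={'Parquet': {}},
            OutputSerialization={'CSV': {}},
        )
        # Each object's count comes back as a single CSV record.
        for event in resp['Payload']:
            if 'Records' in event:
                total += int(event['Records']['Payload'].decode('utf-8').strip())

print('Total row count:', total)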
Upvotes: 3
Reputation: 31
Yes, Amazon S3 has the SELECT feature. Also keep an eye on the cost while executing queries with SELECT. For example, here is the pricing as of June 2018 (this may vary): S3 Select pricing is based on the size of the input, the output, and the data transferred. Each query costs 0.002 USD per GB scanned, plus 0.0007 USD per GB returned.
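At those rates, counting the 20 GB file from the question would cost roughly 20 GB × 0.002 USD/GB = 0.04 USD per query, and the return charge is negligible since only a single count row comes back.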
Upvotes: 3