Reputation: 59
import boto3
import pandas as pd
BUCKET_NAME = ''
ACCESS_KEY_ID = ''
ACCESS_SECRET_KEY = ''
Fraudfilekey = 'fraud_CT_ID_IM_NO/ CT_PROFILE_One_to_Many_Mapping /yyyy=2021/mm=02/dd=05/2021_02_05_CT_TEST.csv'
d = {"A" : ["John","Deep","Julia","Kate","Sandy"],
"MonthSales" : [25,30,35,40,45]}
df = pd.DataFrame(d)
s3 = boto3.client('s3', region_name='ap-south-1', aws_access_key_id=ACCESS_KEY_ID,
                  aws_secret_access_key=ACCESS_SECRET_KEY)

def write_to_s3_oneim_to_onect(df):
    s3.put_object(Body=df, Bucket=BUCKET_NAME, Key=Fraudfilekey)
write_to_s3_oneim_to_onect(df)
How can I write the dictionary value directly to the S3 bucket? When I pass the DataFrame as Body I get the error below:

raise ParamValidationError(report=report.generate_report())
botocore.exceptions.ParamValidationError: Parameter validation failed:
Invalid type for parameter Body, value: A MonthSales
Note: I want the header columns in the CSV file to be IM No and CT ID.
Upvotes: 2
Views: 1162
Reputation: 238111
There are a few ways. One would be to use BytesIO as an in-memory buffer for the file:
import io
def write_to_s3_oneim_to_onect(df):
    # Write the DataFrame as CSV into an in-memory buffer, then upload the bytes
    bytes_io = io.BytesIO()
    df.to_csv(bytes_io)
    s3.put_object(Body=bytes_io.getvalue(),
                  Bucket=BUCKET_NAME,
                  Key=Fraudfilekey)
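Since the question notes that the CSV headers should be IM No and CT ID, you can also rename the columns before writing. A minimal sketch (the helper name and the mapping from A/MonthSales to those headers are assumptions); calling to_csv() without a path returns the CSV as a string, which avoids the buffer entirely:

def write_to_s3_with_headers(df):
    # Assumed mapping: rename the DataFrame columns to the desired CSV headers
    renamed = df.rename(columns={"A": "IM No", "MonthSales": "CT ID"})
    # to_csv() with no path returns the CSV as a string; index=False drops the row index
    csv_bytes = renamed.to_csv(index=False).encode("utf-8")
    s3.put_object(Body=csv_bytes, Bucket=BUCKET_NAME, Key=Fraudfilekey)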
Another would be to use s3fs, which pandas supports. This requires installing s3fs and setting up AWS credentials for it to use, but once that is done, writing to S3 is just:
def write_to_s3_oneim_to_onect(df):
    df.to_csv(f"s3://{BUCKET_NAME}/{Fraudfilekey}")
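If you would rather not rely on environment variables or a shared credentials file, recent pandas versions (1.2+) can forward credentials to s3fs through the storage_options argument. A sketch reusing the constants from the question:

def write_to_s3_oneim_to_onect(df):
    # storage_options is passed through to s3fs, so explicit credentials can be used
    df.to_csv(
        f"s3://{BUCKET_NAME}/{Fraudfilekey}",
        index=False,
        storage_options={"key": ACCESS_KEY_ID, "secret": ACCESS_SECRET_KEY},
    )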
Upvotes: 3