Reputation: 121
I'm new to GCP and I'm trying to build a simple API with Cloud Functions. The API needs to read a CSV from a Google Cloud Storage bucket and return a JSON. Locally I can do this without any problem by simply opening the file.
But in Cloud Functions I get a blob from the bucket, and I don't know how to manipulate it, so I'm getting an error.
I tried converting the blob to bytes and to a string, but I don't know exactly how to do it.
Code working in my local env:
import csv, datetime

data1 = '2019-08-20'
data1 = datetime.datetime.strptime(data1, '%Y-%m-%d')
data2 = '2019-11-21'
data2 = datetime.datetime.strptime(data2, '%Y-%m-%d')
total = 0

with open("/home/thiago/mycsvexample.csv", "r") as fin:
    #create a CSV dictionary reader object
    print(type(fin))
    csv_dreader = csv.DictReader(fin)
    #iterate over all rows in CSV dict reader
    for row in csv_dreader:
        #check for invalid Date values
        #convert date string to a date object
        date = datetime.datetime.strptime(row['date'], '%Y-%m-%d')
        #check if date falls within requested range
        if date >= data1 and date <= data2:
            total = total + float(row['total'])
print(total)
Code in Google Cloud Functions:
import csv, datetime
from google.cloud import storage
from io import BytesIO

def get_orders(request):
    """Responds to any HTTP request.
    Args:
        request (flask.Request): HTTP request object.
    Returns:
        The response text or any set of values that can be turned into a
        Response object using
        `make_response <http://flask.pocoo.org/docs/1.0/api/#flask.Flask.make_response>`.
    """
    request_json = request.get_json()
    if request.args and 'token' in request.args:
        if request.args['token'] == 'mytoken888888':
            client = storage.Client()
            bucket = client.get_bucket('mybucketgoogle.appspot.com')
            blob = bucket.get_blob('mycsvfile.csv')
            byte_stream = BytesIO()
            blob.download_to_file(byte_stream)
            byte_stream.seek(0)
            file = byte_stream
            #with open(BytesIO(blob), "r") as fin:
            #create a CSV dictionary reader object
            csv_dreader = csv.DictReader(file)
            #iterate over all rows in CSV dict reader
            for row in csv_dreader:
                #check for invalid Date values
                date = datetime.datetime.strptime(row['date'], '%Y-%m-%d')
                #check if date falls within requested range
                if date >= datetime.datetime.strptime(request.args['start_date']) and date <= datetime.datetime.strptime(request.args['end_date']):
                    total = total + float(row['total'])
            dict = {'total_faturado' : total}
            return dict
        else:
            return f'Passe parametros corretos'
    else:
        return f'Passe parametros corretos'
Error in Google Cloud Functions:
Traceback (most recent call last):
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 346, in run_http_function
    result = _function_handler.invoke_user_function(flask.request)
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 217, in invoke_user_function
    return call_user_function(request_or_event)
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 210, in call_user_function
    return self._user_function(request_or_event)
  File "/user_code/main.py", line 31, in get_orders_tramontina
    for row in csv_dreader:
  File "/opt/python3.7/lib/python3.7/csv.py", line 111, in __next__
    self.fieldnames
  File "/opt/python3.7/lib/python3.7/csv.py", line 98, in fieldnames
    self._fieldnames = next(self.reader)
_csv.Error: iterator should return strings, not bytes (did you open the file in text mode?)
I've tried some other things, but with no success...
Can someone help me convert or manipulate this blob the right way?
Thank you all
Upvotes: 1
Views: 6561
Reputation: 121
I was also able to do this using the gcsfs library:
https://gcsfs.readthedocs.io/en/latest/
import csv, datetime, json
import gcsfs

def get_orders_tramontina(request):
    """Responds to any HTTP request.
    Args:
        request (flask.Request): HTTP request object.
    Returns:
        The response text or any set of values that can be turned into a
        Response object using
        `make_response <http://flask.pocoo.org/docs/1.0/api/#flask.Flask.make_response>`.
    """
    request_json = request.get_json()
    if request.args and 'token' in request.args:
        if request.args['token'] == 'mytoken':
            fs = gcsfs.GCSFileSystem(project='myproject')
            total = 0
            with fs.open('mybucket.appspot.com/mycsv.csv', "r") as fin:
                csv_dreader = csv.DictReader(fin)
                #iterate over all rows in CSV dict reader
                for row in csv_dreader:
                    #check for invalid Date values
                    date = datetime.datetime.strptime(row['date'], '%Y-%m-%d')
                    #check if date falls within requested range
                    if date >= datetime.datetime.strptime(request.args['start_date'], '%Y-%m-%d') and date <= datetime.datetime.strptime(request.args['end_date'], '%Y-%m-%d'):
                        total = total + float(row['total'])
            dict = {'total_faturado' : total}
            return json.dumps(dict)
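Note that for a Cloud Function, gcsfs also has to be declared as a dependency so it gets installed at deploy time. A minimal requirements.txt sketch for the Python runtime (package name only, no version pin):

gcsfs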
Upvotes: 1
Reputation: 8056
This is the code that worked for me:
from google.cloud import storage
import csv, datetime

client = storage.Client()
bucket = client.get_bucket('source')
blob = bucket.blob('file')
dest_file = '/tmp/file.csv'
blob.download_to_filename(dest_file)

dict = {}
total = 0
with open(dest_file) as fh:
    # assuming your csv is delimited by comma
    rd = csv.DictReader(fh, delimiter=',')
    for row in rd:
        date = datetime.datetime.strptime(row['date'], '%Y-%m-%d')
        #check if date falls within requested range
        if date >= datetime.datetime.strptime(request.args['start_date'], '%Y-%m-%d') and date <= datetime.datetime.strptime(request.args['end_date'], '%Y-%m-%d'):
            total = total + float(row['total'])
dict['total_faturado'] = total
Upvotes: 2
Reputation: 2990
Try downloading the file as a string; that way you can check for invalid data values, and eventually write that out to a file.
Change blob.download_to_file(byte_stream)
to my_blob_str = blob.download_as_string()
I think your actual problem is byte_stream = BytesIO(),
since your output reads iterator should return strings, not bytes (did you open the file in text mode?)
It is expecting a string but gets bytes. What is the purpose of byte_stream? If there isn't one, just remove it.
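A minimal sketch of that approach, reusing the bucket and object names from the question (the decode to UTF-8 and the splitlines() call are my additions, since csv.DictReader needs an iterable of text lines):

from google.cloud import storage
import csv

client = storage.Client()
bucket = client.get_bucket('mybucketgoogle.appspot.com')
blob = bucket.get_blob('mycsvfile.csv')

# download_as_string() returns bytes, so decode to text first
csv_text = blob.download_as_string().decode('utf-8')

# DictReader accepts any iterable of text lines
for row in csv.DictReader(csv_text.splitlines()):
    print(row['date'], row['total'])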
Upvotes: 0