maltman

Reputation: 454

Parse files in AWS S3 with boto3

I am attempting to read files from my S3 bucket and parse them with a regex pattern. However, I have not been able to figure out how to read the files line by line. Is there a way to do this, or do I need a different approach to the parsing?

pattern = r'^(19|20)\d\d[-.](0[1-9]|1[012])[-.](0[1-9]|[12][0-9]|3[01])[ \t]+([0-9]|0[0-9]|1[0-9]|2[0-3]):[0-5][0-9]:[0-5][0-9][ \t]+(?:[0-9]{1,3}\.){3}[0-9]{1,3}[ \t]+(?:GET|POST|PUT)[ \t]+([^\s]+)[ \t]+[1-5][0-9][0-9][ \t]+(\d+)[ \t]+(\d+)[ \t]+"(?:[^"\\]|\\.)*"[ \t]+"(?:[^"\\]|\\.)*"[ \t]+"(?:[^"\\]|\\.)*"'

s3 = session.resource('s3')
bucket_name = s3.Bucket(bucket)
data = [obj for obj in list(bucket_name.objects.filter(Prefix=prefix)) if obj.key != prefix]

for obj in data:
    key = obj.key
    body = obj.get()['Body'].read()
    print(key)
    print(body)
    for line in body:
        print(line)

So I am able to see the correct file and read the whole body of the file (close to an IIS log). However, when I try to iterate over the lines, I get numbers. The output of print(line) is

35
101
119
147
etc.

I have no idea where these numbers are coming from. Are they words, characters, something else?

My goal is to apply my pattern once I am able to read the file line by line with the regular expression operator.

EDIT: Here is one of my log lines

2016-06-14  14:03:42    1.1.1.1 GET /origin/ScriptResource.axd?=5f9d5645    200 26222   0   "site.com/en-US/CategoryPage.aspx"  "Mozilla/5.0 (Linux; Android 4.4.4; SM-G318HZ Build/KTU84P)"    "ASP.NET_SessionId=emfyTVRJNqgijw=; __SessionCookie=bQMfQzEtcnfMSQ==; __CSARedirectTags=ABOcOxWK/O5Rw==; dtCookie=B52435A514751459148783108ADF35D5|VVMrZVN1aXRlK1BXU3wx"

Upvotes: 4

Views: 5520

Answers (1)

Dinesh Pundkar

Reputation: 4196

I used a text file with the following content in the solution below:

I love AWS.
I love boto3.
I love boto2.

I think the problem is with this line:

for line in body:

It iterates character by character instead of line by line.

C:\Users\Administrator\Desktop>python bt.py
I

l
o
v
e

A
W
S
.



I

l
o
v
e

b
o
t
o
3
.



I

l
o
v
e

b
o
t
o
2
.

C:\Users\Administrator\Desktop>
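(In the question's case the loop prints integers rather than characters: under Python 3, read() returns bytes, and iterating over a bytes object yields the integer byte values. A quick sketch of that behaviour:)

```python
# Iterating a str yields characters, but iterating bytes
# (what read() returns under Python 3) yields integer byte values.
body = b"I love AWS."
print([c for c in "I love AWS."])  # ['I', ' ', 'l', ...]
print([b for b in body])           # [73, 32, 108, ...]
```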

Instead, use:

for line in body.splitlines():

Then the output looks like this:

C:\Users\Administrator\Desktop>python bt.py
I love AWS.
I love boto3.
I love boto2.

C:\Users\Administrator\Desktop>
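One caveat for the S3 case: under Python 3, read() returns bytes, so splitlines() yields bytes lines, which a str regex pattern will not match. Decoding first avoids that. A minimal sketch, assuming UTF-8 content:

```python
# Decode the raw bytes body before splitting, so each line is a str
# that a str regex pattern can match.
body = b"I love AWS.\nI love boto3.\nI love boto2.\n"
for line in body.decode("utf-8").splitlines():
    print(line)
```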

Applying the above to your case, the code below splits the body into lines and runs a regex over each line to pull out the log fields:

import re

header = ['Date', 'time', 'IP', 'method', 'request', 'status code', 'bytes', 'time taken', 'referrer', 'user agent', 'cookie']
s3 = session.resource('s3')
bucket_name = s3.Bucket(bucket)
data = [obj for obj in list(bucket_name.objects.filter(Prefix=prefix)) if obj.key != prefix]

for obj in data:
    key = obj.key
    # read() returns bytes; decode so the str pattern below can match
    body = obj.get()['Body'].read().decode('utf-8')
    for line in body.splitlines():
        m = re.search(r'(\d{4}-\d{2}-\d{2})\s+(\d{2}:\d{2}:\d{2})\s+([\d\.]+)\s+(GET|PUT|POST)\s+(\S+)\s+(\d+)\s+(\d+)\s+(\d+)\s+(\S+)\s+(\".*?\")\s+(.*)', line)
        if m is not None:
            # group(0) is the whole match, so captured fields start at group(1)
            for i in range(11):
                print(header[i], " - ", m.group(i + 1))
        print("------------------------------------")
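As a variation, zipping the header names with m.groups() builds a dict of fields directly. A sketch using a line adapted from the question (the cookie value is shortened for readability):

```python
import re

# Field names and a sample line adapted from the question's log format.
header = ['Date', 'time', 'IP', 'method', 'request', 'status code',
          'bytes', 'time taken', 'referrer', 'user agent', 'cookie']
line = ('2016-06-14  14:03:42    1.1.1.1 GET '
        '/origin/ScriptResource.axd?=5f9d5645    200 26222   0   '
        '"site.com/en-US/CategoryPage.aspx"  '
        '"Mozilla/5.0 (Linux; Android 4.4.4; SM-G318HZ Build/KTU84P)"    '
        '"dtCookie=B52435A514751459148783108ADF35D5"')
m = re.search(r'(\d{4}-\d{2}-\d{2})\s+(\d{2}:\d{2}:\d{2})\s+([\d\.]+)\s+'
              r'(GET|PUT|POST)\s+(\S+)\s+(\d+)\s+(\d+)\s+(\d+)\s+'
              r'(\S+)\s+(\".*?\")\s+(.*)', line)
if m:
    # Pair each captured group with its field name.
    record = dict(zip(header, m.groups()))
    print(record['method'], record['status code'], record['request'])
```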

Upvotes: 4
