dportman

Reputation: 1109

Kafka to Pandas dataframe without Spark

I am reading streaming data from a Kafka topic and I want to store some parts of it in a pandas DataFrame.

from confluent_kafka import Consumer, KafkaError

c = Consumer({
    'bootstrap.servers': '###',
    'group.id': '###',
    'default.topic.config': {
        'auto.offset.reset': 'latest'
    }
})

c.subscribe(['scorestore'])

while True:
    # Poll with a 1 second timeout
    msg = c.poll(1.0)

    if msg is None:
        continue
    if msg.error():
        # Reaching the end of a partition is not a real error; keep polling
        if msg.error().code() == KafkaError._PARTITION_EOF:
            continue
        else:
            print(msg.error())
            break

    print('Received message: {}'.format(msg.value().decode('utf-8')))

c.close()

The received message is JSON:

{
  "messageHeader" : {
    "messageId" : "4b604b33-7256-47b6-89d6-eb1d92a282e6",
    "timestamp" : 152520000,
    "sourceHost" : "test",
    "sourceLocation" : "test",
    "tags" : [ ],
    "version" : "1.0"
  },
  "id_value" : {
    "id" : "1234",
    "value" : "333.0"
  }
}

I am trying to create a DataFrame with the timestamp, id and value columns, for example:

   timestamp    id  value
0  152520000  1234  333.0

Is there a way to accomplish this without parsing the JSON message and appending the values I need to the DataFrame row by row?

Upvotes: 5

Views: 8282

Answers (1)

migjimen

Reputation: 571

The solution I propose may be a little tricky. Imagine you have your JSON message in a string named 'msg_str':

import pandas as pd

msg_str = '{  "messageHeader" : { "messageId" : "4b604b33-7256-47b6-89d6-eb1d92a282e6",    "timestamp" : 152520000,    "sourceHost" : "test",    "sourceLocation" : "test",    "tags" : [ ],    "version" : "1.0"  },  "id_value" : {    "id" : "1234",    "value" : "333.0"  }}'


# First create a DataFrame with read_json
p = pd.read_json(msg_str)
# Now you have a DataFrame with two columns. Where one column has a
# value, the other has a NaN. Create a new column keeping only the
# values which are not NaN
p['fusion'] = p['id_value'].fillna(p['messageHeader'])
# Drop columns 'id_value' and 'messageHeader' as you don't need them anymore
p = p[['fusion']].reset_index()
# Create a temporary column only to serve as the index for a pivot
p['tmp'] = 0
# Do the pivot to convert rows into columns
p = p.pivot(index='tmp', values='fusion', columns='index')
# Finally keep only the columns you are interested in
p = p.reset_index()[['timestamp', 'id', 'value']]

print(p)

Result:

index  timestamp    id value
0      152520000  1234   333

Then you can append this DataFrame to a DataFrame where you are accumulating your results.
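A minimal sketch of that accumulation step, assuming an accumulator named 'scores' (an illustrative name, not from the original code):

import pandas as pd

# Hypothetical accumulator for the extracted rows
scores = pd.DataFrame(columns=['timestamp', 'id', 'value'])

# 'p' is the one-row DataFrame built above; ignore_index renumbers the rows
scores = pd.concat([scores, p], ignore_index=True)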

Maybe there is a simpler solution, but I hope this helps if not.
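For instance, json_normalize may be that simpler route. This is a sketch, assuming pandas 1.0+ (where it is available as pd.json_normalize; older versions had it under pandas.io.json) and the message structure shown in the question:

import json
import pandas as pd

record = json.loads(msg_str)

# json_normalize flattens nested keys into dotted column names,
# e.g. 'messageHeader.timestamp' and 'id_value.id'
flat = pd.json_normalize(record)

p = flat[['messageHeader.timestamp', 'id_value.id', 'id_value.value']]
p.columns = ['timestamp', 'id', 'value']
print(p)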

Upvotes: 2
