Reputation: 353
I am attempting to fit a JSON file into a dataframe. My currently broken code creates the JSON file as follows:
import json
import subprocess

fname = 'python.json'
with open(fname, 'r') as f, open('sentiment.json', 'w') as s:
    for line in f:
        tweet = json.loads(line)
        # Pull the tweet text out of each JSON line
        tweet_words = tweet['text']
        # POST the text to the sentiment API via curl
        output = subprocess.check_output(['curl', '-d', "text=" + tweet_words.encode('utf-8'),
                                          'http://text-processing.com/api/sentiment/'])
        s.write(output + "\n")
This writes the output requested from the text-processing.com API into 'sentiment.json'. I then load the JSON using:
def load_json(file, skip):
    with open(file, 'r') as f:
        read = f.readlines()
        json_data = (json.loads(line) for i, line in enumerate(read) if i % skip == 0)
        return json_data
And then construct the dataframe using:
sentiment_df = load_json('sentiments.json', 1)
data = {'positive': [], 'negative': [], 'neutral': []}
for s in sentiment_df:
    data['positive'].append(s['probability']['pos'])
    data['negative'].append(s['probability']['neg'])
    data['neutral'].append(s['probability']['neutral'])
df = pd.DataFrame(data)
Error: ValueError: No JSON object could be decoded
I browsed through several related questions and, based on the answer here from WoodrowShigeru, I suspect it may have something to do with the 'utf-8' encoding in the first block of code.
Does anyone know a good fix, or can at least point me in the right direction? Thanks!
Upvotes: 0
Views: 1321
Reputation: 107767
Your screenshot is not valid JSON, since a container must hold all of the line items, comma-separated. The other issue is that your command-line call returns a string, output, which you then write to a text file. You instead need to build a list of dictionaries and dump it to a JSON file with json.dumps().
Consider casting each command-line string into a dictionary with ast.literal_eval() during the first text file read, then appending each dictionary to a list:
import json
import subprocess
import ast

fname = 'python.json'
dictList = []

with open(fname, 'r') as f, open('sentiment.json', 'w') as s:
    for line in f:
        tweet = json.loads(line)
        # Pull the tweet text out of each JSON line
        tweet_words = tweet['text']
        output = subprocess.check_output(['curl', '-d', "text=" + tweet_words.encode('utf-8'),
                                          'http://text-processing.com/api/sentiment/'])
        # CONVERT STRING TO DICT AND APPEND TO LIST
        dictList.append(ast.literal_eval(output))

    # CONVERT LIST TO JSON AND WRITE TO FILE
    s.write(json.dumps(dictList, indent=4))
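To see what the ast.literal_eval() step does in isolation, here is a minimal sketch on a made-up response string shaped like the label/probability structure in the example output further down; the string and its values are placeholders, not real API output:

import ast

# Hypothetical response string (placeholder values, same shape as the API result)
sample = '{"label": "pos", "probability": {"neg": 0.1, "neutral": 0.2, "pos": 0.7}}'
record = ast.literal_eval(sample)    # parse the literal into a Python dict
print(record['label'])               # pos
print(record['probability']['pos'])  # 0.7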
From there, read the JSON file into a pandas dataframe with json_normalize. The output below uses example data:
import json
import pandas as pd

with open('sentiment.json') as f:
    data = json.load(f)

df = pd.io.json.json_normalize(data)
df.columns = [c.replace('probability.', '') for c in df.columns]
print(df)
# label neg neutral pos
# 0 pos 0.003228 0.204509 0.571945
# 1 pos 0.053094 0.097317 0.912760
# 2 pos 0.954958 0.163341 0.917178
# 3 pos 0.784391 0.646188 0.955281
# 4 pos 0.203419 0.050908 0.490738
# 5 neg 0.122760 0.705633 0.219701
# 6 neg 0.961012 0.923886 0.335999
# 7 neg 0.368639 0.562720 0.124530
# 8 neg 0.566386 0.802366 0.825956
# 9 neg 0.115536 0.512605 0.784626
# 10 neutral 0.202092 0.741778 0.567957
# 11 neutral 0.837179 0.051033 0.509777
# 12 neutral 0.333542 0.085449 0.610222
# 13 neutral 0.798188 0.248258 0.218591
# 14 neutral 0.873109 0.469737 0.005178
# 15 pos 0.916112 0.313960 0.750118
# 16 neg 0.810080 0.852236 0.212373
# 17 neutral 0.748280 0.039534 0.323145
# 18 pos 0.274492 0.461644 0.984955
# 19 neg 0.063772 0.793171 0.631172
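If you want the positive/negative/neutral column names from the loop in your question, a rename after normalizing should do it (a sketch, assuming the columns shown above):

# Map the API's short keys back to the names used in the question
df = df.rename(columns={'pos': 'positive', 'neg': 'negative'})
df = df[['positive', 'negative', 'neutral', 'label']]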
Upvotes: 1