Reputation: 737
I'm trying to implement this code and keep getting the following error. I have searched and tried everything, and nothing gets me past it.
import json
import sys
import re
import os

reload(sys)
sys.setdefaultencoding('utf8')

path = '/Users/.../'  # (The actual path is in my code)
textfiles = []
for root, dirs, files in os.walk(r'/Users/.../'):
    for file in files:
        if file.endswith(".txt"):
            textfiles.append(file)

for filename in textfiles:
    with open(path + filename) as json_data:
        data = json.load(json_data)
    opinion = data['plain_text']
    f = open(path + filename, 'w')
    f.write(opinion)
    f.close()
Then, I keep getting this error:
ValueError: Expecting , delimiter: line 17 column 3 (char 765)
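For reference, this error comes straight from the json parser when two members of an object aren't separated by a comma. A minimal reproduction (the exact message wording can vary by Python version):

```python
import json

# A JSON object with the comma missing between its two members
bad = '{"a": 1 "b": 2}'
try:
    json.loads(bad)
except ValueError as e:
    # Prints something like: Expecting ',' delimiter: line 1 column 9 (char 8)
    print(e)
```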
Upvotes: 1
Views: 1029
Reputation: 2211
Your problem clearly lies within the JSON files. But which one, and where? If you put in a try/except clause, you can log which files are causing the problem so you can debug a little easier.
import json
import sys
import re
import os

reload(sys)
sys.setdefaultencoding('utf8')

path = '/Users/.../'  # (The actual path is in my code)
textfiles = []
for root, dirs, files in os.walk(r'/Users/.../'):
    for file in files:
        if file.endswith(".txt"):
            textfiles.append(file)

for filename in textfiles:
    try:
        with open(path + filename) as json_data:
            data = json.load(json_data)
        opinion = data['plain_text']
        f = open(path + filename, 'w')
        f.write(opinion)
        f.close()
    except ValueError as e:
        print(filename)  # Gives the filename
        print(e)         # Gives the location of the problem in the file
It will also continue to iterate through the files and report errors on the ones that are problematic.
You can even save the output to a log.txt for processing later.
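Here is a minimal sketch of that logging idea. The setup with a temporary directory and the deliberately broken `bad.txt` is purely hypothetical, just to make the snippet self-contained:

```python
import json
import os
import tempfile

# Hypothetical setup: one valid file and one with a missing comma
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, 'good.txt'), 'w') as f:
    f.write('{"plain_text": "ok"}')
with open(os.path.join(tmpdir, 'bad.txt'), 'w') as f:
    f.write('{"plain_text": "oops" "extra": 1}')

# Record filename + error message for every file that fails to parse
log_path = os.path.join(tmpdir, 'log.txt')
with open(log_path, 'w') as log:
    for name in sorted(os.listdir(tmpdir)):
        if not name.endswith('.txt') or name == 'log.txt':
            continue
        try:
            with open(os.path.join(tmpdir, name)) as fh:
                json.load(fh)
        except ValueError as e:
            log.write('%s: %s\n' % (name, e))

with open(log_path) as log:
    print(log.read())
```

Only the broken file ends up in the log, along with the line/column where parsing failed.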
Here is a little example:

lst = [1, 2, '3', 4, 5]
for i in lst:
    try:
        l = 123 + i
        print(l)
    except Exception as e:
        print(e)
output:
124
125
unsupported operand type(s) for +: 'int' and 'str'
127
128
Upvotes: 2
Reputation: 1425
It's saying there's a missing comma at line 17 in one of your JSON files. Which one? There should have been more context in the error, but if not, you should try to isolate it by processing smaller batches of files.
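One more option, assuming Python 3.5+: json raises json.JSONDecodeError (a ValueError subclass) whose lineno, colno, and pos attributes point at the exact spot, so you can print the offending line directly instead of batching:

```python
import json

broken = '{\n  "plain_text": "a"\n  "extra": 1\n}'  # comma missing after "a"
try:
    json.loads(broken)
except json.JSONDecodeError as e:
    print(e.lineno, e.colno, e.pos)              # where the parse failed
    offending = e.doc.splitlines()[e.lineno - 1]  # the offending line itself
    print(offending)
    lineno = e.lineno
```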
Also, why would you have 700+ characters in the JSON data?
Upvotes: 1