Reputation: 1
Hi all, can you help me enhance my Python code for creating JSON files? I want to create a loop that builds a separate JSON file for each folder from the CSV files saved in that folder, and then saves all of the JSON files in one common folder. Right now I am using the code below and manually changing the last two characters of the path to create the JSON files one by one, which is a very tedious task.
Or is it possible to do the same task in R?
import csv
import json
import glob
import os
class csv2jsonindirectory():
    def Python_trial(self):
        # Update the following variable with the path in windows and replace
        # every "\" with "/".
        path_to_folder = "C:\\Users\\CSVs\\AO"
        csv_files_in_folder = path_to_folder + '/*.csv'
        csvfilenames = []
        i = 1
        mydict = {}
        for filename in glob.glob(csv_files_in_folder):
            csvfilenames.append(os.path.splitext(filename)[0])
        rows = []
        for i in range(len(csvfilenames)):
            with open(csvfilenames[i] + ".csv", "r") as f:
                csvreader = csv.DictReader(f)
                rows = list(csvreader)
            mydict["chartdiv" + str(i + 1)] = rows
        print(mydict)
        with open(csvfilenames[0] + ".json", 'w') as f:
            json.dump(mydict, f, indent=4)
dd = csv2jsonindirectory()
dd.Python_trial()
Upvotes: 0
Views: 394
Reputation: 4418
Break it down into smaller problems.
I'm going to drop the class, because it doesn't seem to be needed. Also, I'm going to use pathlib instead of os.path.
import csv
import json
import pathlib
First, define a function that takes a path to a CSV file and returns its contents as a list of dicts:
def load_csv(filepath):
    with filepath.open("r") as f:
        csvreader = csv.DictReader(f)
        return list(csvreader)
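As a quick sanity check (the file name here is only an example, not one of your actual files), you could call it on a single CSV:

rows = load_csv(pathlib.Path("C:/Users/CSVs/AO/example.csv"))
# each row is a dict keyed by the CSV header row, e.g. {"Date": "...", "Value": "..."}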
Now define a function that takes a path to a directory and loads all the csv files in that directory.
def load_all_csv_in_dir(dirpath):
    csv_map = {}
    for counter, filepath in enumerate(dirpath.glob("*.csv"), 1):
        csv_map[f"chartdiv{counter}"] = load_csv(filepath)
    return csv_map
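For illustration (the AO folder is taken from your question, the column names are made up), the returned mapping keeps your original chartdiv naming, with enumerate starting the counter at 1:

csv_map = load_all_csv_in_dir(pathlib.Path("C:/Users/CSVs/AO"))
# e.g. {"chartdiv1": [{"col1": "a", ...}, ...], "chartdiv2": [...], ...}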
Define a function that takes a root directory, gathers the CSV files in each of its subdirectories, and saves each subdirectory's data to its own JSON file:
def gather_csv_from_subdirs(rootpath):
    for path in rootpath.iterdir():
        if not path.is_dir():
            continue
        csv_map = load_all_csv_in_dir(path)
        if not csv_map:
            continue
        jsonfile = rootpath / f"{path.parts[-1]}.json"
        with jsonfile.open("w") as f:
            json.dump(csv_map, f, indent=4)
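Because path.parts[-1] is just the subfolder's name, a folder such as AO produces AO.json, and all of the JSON files end up together in the root folder, which gives you the single common output folder you asked for.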
Lastly, call it on the folder that contains all of the subfolders:
gather_csv_from_subdirs(pathlib.Path("C:/Users/CSVs"))
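For a rough picture of what a run would produce (the folder and file names below are only examples based on the path in your question):

# Before:                            After:
# C:/Users/CSVs/AO/a.csv, b.csv  ->  C:/Users/CSVs/AO.json
# C:/Users/CSVs/AB/c.csv         ->  C:/Users/CSVs/AB.json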
Upvotes: 1