Reputation: 411
Hello, I'm looking for the best way to automate a task. I am working on a web application in which I use JSON translation files of this form:
{ "unique_key" : "value"}
I have several files, one for each supported language, which all have the same number of items.
Example:
i18n_en.json
{ "greeting" : "Hello"}
i18n_fr.json
{ "greeting" : "Bonjour"}
I have managed the evolution of these files very badly, and I have ended up with keys that are no longer used (easily 30% of the ~500 keys, I'd say); the problem is that I don't know which ones. To fix this by hand I would have to search the entire application for the keys that are still used and rebuild a clean file.
My idea is to automate this process with a script along the lines of the sketch below, but I don't really know which language would be best suited for this kind of task, so thank you for guiding me!
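A rough sketch of what I have in mind (assuming jq is installed and that the keys appear literally in the .js/.html sources; the project path is a placeholder):

# Rough sketch: list the keys of i18n_en.json that never appear in the project sources.
project=path/to/my/project
for key in $(jq -r 'keys[]' i18n_en.json); do
    if ! grep -rq --include='*.js' --include='*.html' \
         --exclude-dir=node_modules --exclude-dir=bower_components \
         "$key" "$project"; then
        echo "unused: $key"
    fi
done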
Upvotes: 0
Views: 1615
Reputation: 411
I ended up writing my own shell script:
#!/bin/bash
path_to_project=path/to/my/project

RED='\033[0;31m'
GREEN='\033[0;32m'
NC='\033[0m'

# start from clean output files
rm -f temp.json final.json
touch temp.json final.json

echo "{" >> temp.json

# data.csv holds one "key,value" pair per line
while IFS=, read -r key value
do
    # keep the pair only if the key is referenced somewhere in the project sources
    if grep -r -q --include=\*.{js,html} --exclude-dir={node_modules,bower_components} "$key" "$path_to_project"; then
        # write it into the new json
        echo "\"$key\":\"$value\"," >> temp.json
        echo -e "${GREEN}$key was found!${NC}"
    else
        echo -e "${RED}$key not found${NC}"
    fi
done < data.csv

echo "}" >> temp.json

# strip carriage returns
cat temp.json | tr -d '\r' >> final.json
For this to work, I had to convert my JSON file to CSV (data.csv). The final JSON file needs a little manual rework after the script runs (e.g. the trailing comma on the last entry), but nothing overwhelming really.
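For reference, the JSON-to-CSV conversion can also be scripted; a minimal sketch with jq, assuming the translation file is a flat key/value object like the one above and that the values contain no commas:

# turn a flat {"key": "value"} object into key,value lines for data.csv
jq -r 'to_entries[] | "\(.key),\(.value)"' i18n_en.json > data.csv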
Upvotes: 2
Reputation: 143
Maybe something like i18next-scanner would be useful; you are not the first to run into this kind of problem.
Upvotes: 2