Reputation: 1
I have more than 100 .cwa files (raw accelerometry data) and a .py program which processes the .cwa data and produces a .csv file. How do I write a script in Python which can process all the .cwa files with my .py without having to insert each filename into the command line individually?
I am new to programming. I have looked at glob.glob and os.walk but don't understand how to use these to process multiple files. I want to read in one .cwa file at a time, process it using my .py, and then move on to the next .cwa until all have been read, so that the output is a corresponding number of .csv files. I do not want to merge the data into one file.
Upvotes: 0
Views: 922
Reputation: 546
You can use shell scripting (here is a bash example for Linux) to list files in a directory and call a command for each file:
find /path/to/cwa/files/ -type f -name "*.cwa" -exec /path/to/script.py {} \;
This example assumes that script.py takes a single argument which is a CWA filepath, e.g. script.py /my/cwa/file.cwa
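If script.py is not marked executable (or has no shebang line), one variant, assuming python3 is on your PATH, is to invoke the interpreter explicitly:
find /path/to/cwa/files/ -type f -name "*.cwa" -exec python3 /path/to/script.py {} \;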
If you want to stick with Python, a for-loop with os.listdir might be the easiest approach. Let's assume that your CWA-processing code is in the process_cwa() function:
import os

CWA_DIR = '/path/to/cwa/files'

def process_cwa(path):
    # process a single CWA file
    pass

# process CWA files from CWA_DIR on disk
for file in os.listdir(CWA_DIR):
    if file.endswith(".cwa"):
        path = os.path.join(CWA_DIR, file)
        process_cwa(path)
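Since you mentioned glob and already have a .py program that is run from the command line, here is a minimal sketch of the same loop using glob.glob and subprocess, assuming your script can be invoked as python3 /path/to/script.py file.cwa; each run still writes its own .csv, so nothing gets merged:

import glob
import subprocess

# call the existing command-line script once per .cwa file
for path in glob.glob('/path/to/cwa/files/*.cwa'):
    subprocess.run(['python3', '/path/to/script.py', path], check=True)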
Upvotes: 1