user1700959

Reputation: 53

Efficient way to organise data file in columns with Python

I'm getting an output data file of a program which looks like this, with more than one line for each time step:

0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00 \n   0.0000E+00   0.0000E+00   0.0000E+00   0.0000E+00
7.9819E-06   1.7724E-02   2.3383E-02   3.0048E-02   3.8603E-02   4.9581E-02 \n  5.6635E-02   4.9991E-02   3.9052E-02   3.0399E-02
....

I want to arrange it in ten columns

I have made a Python script that uses a regular expression to delete the \n in the appropriate lines, but I think there should be a simpler, more elegant way to do it. Here is my script:

import re

with open('inputfile', encoding='utf-8') as file1:
    datai = file1.read()

# match eight numbers followed by a newline and drop the newline
dataf = re.sub(r'(?P<nomb>(   \d\.\d\d\d\dE.\d\d){8})\n', r'\g<nomb>', datai)

with open('result.txt', mode='w', encoding='utf-8') as resultfile:
    resultfile.write(dataf)

Upvotes: 3

Views: 364

Answers (4)

Pierre GM

Reputation: 20339

You could try something simple like this:

single_list = []
with open(your_file) as f:
    for line in f:
        single_list.extend(line.rstrip().split())

list_of_rows = [single_list[i*10:i*10+10] for i in range(len(single_list)//10)]

with open(output_file, 'w') as f:
    for line in list_of_rows:
        f.write(' '.join(line) + '\n')

If all your data can be read as a single string (with data = f.read()), you could also:

merged_data = data.replace("\n", " ")
single_list = merged_data.split()

and use single_list as described above.
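The read-all variant above can be sketched end-to-end on a toy sample (three values per row here instead of ten, and literal strings standing in for the file contents):

```python
# Merging the newlines into spaces and splitting on whitespace
# yields one flat list, no matter where the line breaks fall.
data = ("0.0000E+00   0.0000E+00   0.0000E+00\n"
        "   0.0000E+00   7.9819E-06   1.7724E-02\n")
single_list = data.replace("\n", " ").split()

# Regroup into rows of three for this toy example (ten in the real file).
rows = [single_list[i:i + 3] for i in range(0, len(single_list), 3)]
```

Writing the rows back out is then the same ' '.join loop shown above.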


If the input file is large and creating temporary lists is a memory issue, you could try something like:

with open(input_file, 'r') as inpf, open(output_file, 'w') as outf:
    writable = []
    for line in inpf:
        row = line.rstrip().split()
        writable.extend(row)
        while len(writable) >= 10:
            outf.write(" ".join(writable[:10]) + "\n")
            writable = writable[10:]

Upvotes: 2

LarsVegas

Reputation: 6812

You could create a dictionary to store the data in a column-like structure:

with open('inputfile', encoding='utf-8') as file1:
    in_f = file1.readlines()

# split() with no argument handles any run of spaces or tabs
arr = [line.strip().split() for line in in_f]

# create an empty dict
db = {}

# use the index of the elements as a key
for i in range(len(arr[0])):
    db[i] = []

# loop first through the lists, then
# iterate over the elements...
for line in arr:
    for i, element in enumerate(line):
        db[i].append(element)

output:

>>> db
{0: ['0.0000E+00', '7.9819E-06'],
 1: ['0.0000E+00', '1.7724E-02'],
 2: ['0.0000E+00', '2.3383E-02'],
 3: ['0.0000E+00', '3.0048E-02'],
 4: ['0.0000E+00', '3.8603E-02'],
 5: ['0.0000E+00', '4.9581E-02'],
 6: ['0.0000E+00', '5.6635E-02'],
 7: ['0.0000E+00', '4.9991E-02'],
 8: ['0.0000E+00', '3.9052E-02'],
 9: ['0.0000E+00', '3.0399E-02']}

Upvotes: 1

tarrasch

Reputation: 2680

The simplest solution I can think of is to just use numpy:

import numpy as np

data = np.genfromtxt('file', unpack=True, names=True, dtype=None)

What you get is a structured array that you access with

print(data[1][1])

or, if you have headers, use those:

print(data['header'])
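Since each logical row here is wrapped across physical lines, genfromtxt may balk at the ragged rows. A sketch of an alternative with numpy (toy values with five columns standing in for the real ten): split the raw text yourself, then let reshape rebuild the columns.

```python
import numpy as np

# Splitting the whole text on whitespace ignores where the
# line breaks fall, so wrapped rows are not a problem.
text = "0.0 1.0 2.0 3.0 4.0\n5.0 6.0 7.0 8.0 9.0\n"
values = np.array(text.split(), dtype=float)

# -1 lets numpy infer the row count; use 10 columns for the real data.
table = values.reshape(-1, 5)
```

np.savetxt could then write the table back out in fixed columns.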

Upvotes: 1

Ber

Reputation: 41813

You could use split() on each line (or group of lines) to generate a list of strings containing one number each, then use <string>.join(<list_of_numbers>) to join them into a new line.
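That split/join idea can be sketched on a pair of wrapped lines (toy values, not from the question's file):

```python
# Two physical lines that belong to one logical row:
line1 = "0.0000E+00   7.9819E-06"
line2 = "1.7724E-02   2.3383E-02"

# split() gives one string per number; join() rebuilds a single line.
numbers = (line1 + " " + line2).split()
joined = "   ".join(numbers)
```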

Upvotes: 1
