awanit

Reputation: 263

How to remove redundant data from a text file

I have calculated the distance between pairs of atoms and saved the results in out.txt. The generated file looks like this:

N_TYR_A0002      O_CYS_A0037     6.12    
O_CYS_A0037      N_TYR_A0002     6.12
N_ALA_A0001      O_TYR_A0002     5.34
O_TYR_A0002      N_ALA_A0001     5.34

My output file has repeats, i.e. the same pair of atoms with the same distance.
How can I remove the redundant lines?

I used this program for the distance calculation (all atoms against all atoms):

from __future__ import division
import math

def eudistance(c1, c2):
    x_dist = (c1[0] - c2[0])**2
    y_dist = (c1[1] - c2[1])**2
    z_dist = (c1[2] - c2[2])**2
    return math.sqrt(x_dist + y_dist + z_dist)

infile = open('file.pdb', 'r')
text = infile.read().split('\n')
infile.close()
text.remove('')

pdbid = []
#define the pdbid
spfcord = []
for g in pdbid:
    ratom = g[0]
    ratm1 = ratom.split('_')
    ratm2 = ratm1[0]
    if ratm2 in allatoms:
        spfcord.append(g)

#print spfcord[:10]

outfile1 = open('pairdistance.txt', 'w')
for m in spfcord:
    name1 = m[0]
    cord1 = m[1]
    for n in spfcord:
        if n != '':
            name2 = n[0]
            cord2 = n[1]

            dist = eudistance(cord1, cord2)
            if 7 > dist > 2:
                #print name1, '\t', name2, '\t', dist
                distances = name1 + '\t ' + name2 + '\t ' + str(dist)

                #print distances

                outfile1.write(distances)
                outfile1.write('\n')
outfile1.close()

Upvotes: 1

Views: 484

Answers (3)

AnishT

Reputation: 356

Let's try to avoid generating the duplicates in the first place. Change this part of the code:

outfile1 = open('pairdistance.txt', 'w')
length = len(spfcord)
for i,m in enumerate(spfcord):
    name1 = m[0]
    cord1 = m[1]
    for n in islice(spfcord,i+1,length):

Add the import:

from itertools import islice
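For context, here is a complete, runnable sketch of that idea: the inner loop starts at index i+1, so each unordered pair is visited exactly once and no mirrored duplicates are ever written. The sample atom names and coordinates are made up for illustration; `spfcord` would come from the PDB parsing in the question.

```python
from itertools import islice
import math

def eudistance(c1, c2):
    # Euclidean distance between two 3D points
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

# Hypothetical sample data in the same (name, coords) shape as spfcord
spfcord = [
    ('N_TYR_A0002', (0.0, 0.0, 0.0)),
    ('O_CYS_A0037', (6.12, 0.0, 0.0)),
    ('N_ALA_A0001', (0.0, 5.34, 0.0)),
]

pairs = []
length = len(spfcord)
for i, m in enumerate(spfcord):
    name1, cord1 = m
    # Only look at atoms after index i, so each pair appears once
    for n in islice(spfcord, i + 1, length):
        name2, cord2 = n
        dist = eudistance(cord1, cord2)
        if 7 > dist > 2:
            pairs.append((name1, name2, round(dist, 2)))

for p in pairs:
    print('\t'.join(map(str, p)))
```

With the sample data above this emits each qualifying pair once; the O_CYS/N_ALA pair is dropped because its distance exceeds 7.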

Upvotes: 0

Greg

Reputation: 12234

Okay, I have an idea. I'm not pretending it is the best or cleanest way, but it was fun, so:

import numpy as np
from StringIO import StringIO

data_in_file = """
N_TYR_A0002, O_CYS_A0037, 6.12    
N_ALA_A0001, O_TYR_A0002, 5.34
P_CUC_A0001, N_TYR_A0002, 9.56
O_TYR_A0002, N_ALA_A0001, 5.34
O_CYS_A0037, N_TYR_A0002, 6.12
N_TYR_A0002, P_CUC_A0001, 9.56
"""

# Import data using numpy; any method is okay really, as we don't rely on the data being arrays
data_in_array = np.genfromtxt(StringIO(data_in_file), delimiter=",", autostrip=True, 
                              dtype=[('atom_1', 'S12'), ('atom_2', 'S12'), ('distance', '<f8')])

N = len(data_in_array['distance'])

pairs = []

# For each item find the repeated index
for index, a1, a2 in zip(range(N), data_in_array['atom_1'], data_in_array['atom_2']):
    repeat_index = list((data_in_array['atom_2'] == a1) * (data_in_array['atom_1'] == a2)).index(True)
    pairs.append(sorted([index, repeat_index]))

# Each item is repeated, so sort and remove every other one
unique_indexs = [item[0] for item in sorted(pairs)[0:N:2]]

atom_1 = data_in_array['atom_1'][unique_indexs]
atom_2 = data_in_array['atom_2'][unique_indexs]
distance = data_in_array['distance'][unique_indexs]

for i in range(N/2):
    print atom_1[i], atom_2[i], distance[i]

#Prints
N_TYR_A0002 O_CYS_A0037 6.12
N_ALA_A0001 O_TYR_A0002 5.34
P_CUC_A0001 N_TYR_A0002 9.56

I should add that this assumes every pair is repeated exactly once and that no item exists without a partner. An unpaired item will break the code, but that could be handled with exception handling.
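To illustrate that failure mode: `list.index` raises `ValueError` when an atom has no mirrored entry, and catching it lets you collect unpaired rows instead of crashing. A minimal sketch with plain lists (the row data, including the unpaired row, is invented):

```python
# Rows of (atom_1, atom_2, distance); the last row has no mirror
rows = [
    ('N_TYR_A0002', 'O_CYS_A0037', 6.12),
    ('O_CYS_A0037', 'N_TYR_A0002', 6.12),
    ('P_CUC_A0001', 'X_XXX_A0099', 9.56),  # hypothetical unpaired row
]

pairs = []
unpaired = []
for index, (a1, a2, d) in enumerate(rows):
    # True where another row is the mirrored (a2, a1) pair
    matches = [(r1 == a2 and r2 == a1) for r1, r2, _ in rows]
    try:
        repeat_index = matches.index(True)  # raises ValueError if no mirror
        pairs.append(sorted([index, repeat_index]))
    except ValueError:
        unpaired.append(index)

print(pairs)     # mirrored rows found
print(unpaired)  # rows with no partner
```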

Note that I also changed your input data file to use "," delimiters, and added another pair to be sure the ordering wouldn't break the code.

Upvotes: 1

zero323

Reputation: 330323

If you don't care about order:

def remove_duplicates(input_file):
    with open(input_file) as fr:
        unique = {'\t'.join(sorted([a1, a2]) + [d])
            for a1, a2, d in [line.strip().split() for line in fr]
        }

    for item in unique:
        yield item

if __name__ == '__main__':
    for line in remove_duplicates('out.txt'):
        print line

But a simple check whether name1 < name2 in your script, before computing the distance and writing the data, would probably be better.
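For illustration, a minimal sketch of that check; since every unordered pair is visited twice (once in each name order), requiring name1 < name2 keeps exactly one of the two visits. The atom coordinates here are made up:

```python
import math

# Hypothetical atom -> coordinates mapping
atoms = {
    'N_TYR_A0002': (0.0, 0.0, 0.0),
    'O_CYS_A0037': (6.12, 0.0, 0.0),
}

lines = []
for name1, cord1 in atoms.items():
    for name2, cord2 in atoms.items():
        if name1 < name2:  # skip the mirrored (name2, name1) visit
            dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(cord1, cord2)))
            lines.append('%s\t%s\t%.2f' % (name1, name2, dist))

for line in lines:
    print(line)
```

Each pair is written exactly once, with the names in a fixed order, so no deduplication pass over the output file is needed.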

Upvotes: 1
