Reputation:
First of all, we have two files:
file01.txt
101|10075.0|12|24/12/2015
102|1083.33|12|24/12/2015
The second file has only one line!
file02.txt
101|False|Section06
The first field is the same in both files (it is unique). I must replace data in file01 with the matching data from file02. The match criterion is the first field (the code).
I have one input (asking for the code) and readlines for both files. What do I need to do next? Also, I'm working with lists.
Expected result:
input = 101
The output should be:
101|False|Section06
102|1083.33|12|24/12/2015
Upvotes: 1
Views: 59
Reputation: 85462
This works for the given example:
with open('file01.txt') as fobj1, open('file02.txt') as fobj2:
    data1 = fobj1.readlines()
    data2 = fobj2.readline()

code = data2.split('|', 1)[0]

with open('file01.txt', 'w') as fobj_out:
    for line in data1:
        if line.split('|', 1)[0] == code:
            fobj_out.write(data2 + '\n')
        else:
            fobj_out.write(line)
We open both files for reading:
with open('file01.txt') as fobj1, open('file02.txt') as fobj2:
    data1 = fobj1.readlines()
    data2 = fobj2.readline()
The read data looks like this:
>>> data1
['101|10075.0|12|24/12/2015\n', '102|1083.33|12|24/12/2015']
>>> data2
'101|False|Section06'
We only need the code from file02.txt:
>>> code = data2.split('|', 1)[0]
>>> code
'101'
The data2.split('|', 1) splits at |. Since we only need the first field, one split is enough, and we can limit the number of splits with the second argument 1.
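To see what the maxsplit argument does, here is a short illustration using the first line of file01.txt:

```python
line = '101|10075.0|12|24/12/2015'

# Without a limit, split() cuts at every '|'.
print(line.split('|'))        # ['101', '10075.0', '12', '24/12/2015']

# With maxsplit=1 it stops after the first '|', which is
# enough when we only want the leading code.
print(line.split('|', 1))     # ['101', '10075.0|12/24/12/2015' stays joined]
print(line.split('|', 1)[0])  # '101'
```

Limiting the split also avoids unnecessary work on long lines, since the rest of the line is never scanned for separators.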
Now we open file01.txt again, this time for writing:
with open('file01.txt', 'w') as fobj_out:
    for line in data1:
        if line.split('|', 1)[0] == code:
            fobj_out.write(data2 + '\n')
        else:
            fobj_out.write(line)
The line if line.split('|', 1)[0] == code: does the same split as above, but for every line of file01.txt. If the code is equal to the one from file02.txt, we write the line from file02.txt instead; otherwise we just write the line from file01.txt back.
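To see the whole flow end to end, here is a self-contained sketch that first creates the two sample files from the question (so it can be run anywhere) and then applies the replacement:

```python
# Recreate the sample data from the question.
with open('file01.txt', 'w') as f:
    f.write('101|10075.0|12|24/12/2015\n102|1083.33|12|24/12/2015')
with open('file02.txt', 'w') as f:
    f.write('101|False|Section06')

# Read both files; file02.txt has only one line.
with open('file01.txt') as fobj1, open('file02.txt') as fobj2:
    data1 = fobj1.readlines()
    data2 = fobj2.readline()

# The code is everything before the first '|'.
code = data2.split('|', 1)[0]

# Rewrite file01.txt, swapping in the line from file02.txt
# for the matching code.
with open('file01.txt', 'w') as fobj_out:
    for line in data1:
        if line.split('|', 1)[0] == code:
            fobj_out.write(data2 + '\n')
        else:
            fobj_out.write(line)

with open('file01.txt') as f:
    print(f.read())
# 101|False|Section06
# 102|1083.33|12|24/12/2015
```

This matches the expected output for input 101.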
Upvotes: 1
Reputation: 4060
You can simply concatenate the two sets of data into a single pandas.DataFrame()
, as follows:
import pandas as pd
df1 = pd.DataFrame([[10075.0, 12,'24/12/2015'], [1083.33, 12, '24/12/2015']], index=[101,102], columns=['prc', 'code', 'date'])
'''
101|10075.0|12|24/12/2015
102|1083.33|12|24/12/2015
'''
df2 = pd.DataFrame([[False, 'Section06'], [True, 'Section07']], index=[101,102], columns=['Bool', 'Section'])
'''
101|False|Section06
102|True|Section07
'''
pd.concat([df1,df2], axis=1, join='outer')
Which gives:
prc code date Bool Section
101 10075.00 12 24/12/2015 False Section06
102 1083.33 12 24/12/2015 True Section07
Now you can get rid of the columns you don't need (e.g. using DataFrame.drop()).
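For instance, building on the frames above, DataFrame.drop with the columns keyword can trim the merged result down to the fields you want to keep (which columns to drop depends on your use case; 'prc' and 'date' here are just for illustration):

```python
import pandas as pd

df1 = pd.DataFrame([[10075.0, 12, '24/12/2015'], [1083.33, 12, '24/12/2015']],
                   index=[101, 102], columns=['prc', 'code', 'date'])
df2 = pd.DataFrame([[False, 'Section06'], [True, 'Section07']],
                   index=[101, 102], columns=['Bool', 'Section'])

# Align the two frames on their shared index (the code) ...
merged = pd.concat([df1, df2], axis=1, join='outer')

# ... then drop the columns that are no longer needed.
trimmed = merged.drop(columns=['prc', 'date'])
print(trimmed)
```

With join='outer', rows whose code appears in only one frame are kept and padded with NaN, which is handy when the two files don't cover exactly the same codes.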
Upvotes: 0
Reputation: 22282
You could use csv.reader()
to read the file, and put them in a dict, then replace the keys like this:
import csv

with open('file1') as f:
    d = {i[0]: i[1:] for i in csv.reader(f, delimiter='|')}

with open('file2') as f:
    d.update({i[0]: i[1:] for i in csv.reader(f, delimiter='|')})
And d
looks like:
{'101': ['False', 'Section06'], '102': ['1083.33', '12', '24/12/2015']}
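The replacement works because dict.update() overwrites the value for keys that already exist and leaves the others untouched, as a minimal illustration with the two codes shows:

```python
d = {'101': ['10075.0', '12', '24/12/2015'],
     '102': ['1083.33', '12', '24/12/2015']}

# '101' exists in both dicts, so its value is replaced;
# '102' only exists in d, so it is kept as-is.
d.update({'101': ['False', 'Section06']})
print(d)
# {'101': ['False', 'Section06'], '102': ['1083.33', '12', '24/12/2015']}
```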
To get the expected output:
>>> ['|'.join([i[0]]+i[1]) for i in d.items()]
['101|False|Section06', '102|1083.33|12|24/12/2015']
And if you want to write them to a file:
with open('file1', 'w') as f:
    for i in d.items():
        f.write('|'.join([i[0]] + i[1]) + '\n')
Upvotes: 2