San

Reputation: 1837

Reading EDI files and writing into new file

I have a big text file (about 20k lines) which I need to use to replace some lines of text in other text files (about 60-70 of them). The other files can be thought of as templates. The lines in these templates need to be replaced based on some conditions. Sample content of the big file:

ISA*00*          *00*          *01*000123456      *ZZ*PARTNERID~       *090827*0936*U*00401*000000055*0*T*>~      
GS*PO*000123456*PARTNERID*20090827*1041*2*X*004010~  
ST*850*0003~  
BEG*00*SA*1000012**20090827~  
REF*SR*N~  
CSH*Y~  
TD5*****UPSG~  
N1*ST*John Doe~  
N3*126 Any St*~  
N4*Hauppauge*NY*11788-1234*US~  
PO1*1*1*EA*19.95**VN*0054321~  
CTT*1*1~  
SE*11*0003~  
GE*1*2~  
IEA*1*000000001~ 

I am loading a FileStream from the content file as below and reading it using a StreamReader.

FileStream baseFileStream = new FileStream("C:\\Content.txt", FileMode.Open);
StreamReader baseReader = new StreamReader(baseFileStream);

Then I need to loop through the template files in a folder one by one. Once I pick a template file I will load it into another FileStream (templates will have at most 300 lines).

While reading a file I will have to go back to previous lines numerous times. But if I read the files using ReadToEnd() or ReadLine(), going back to previous lines will not be possible. To overcome this I am reading each template into a collection of lines. But would it be a good idea to read the content file into a collection as well, given that it is very large? There will be a lot of searching involved in that file. Would a BufferedStream be of any use here?
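This is roughly what I have in mind (a minimal sketch only; the C:\Templates folder and the ShouldReplace condition are placeholders for my real paths and rules):

using System.IO;

class TemplateUpdater
{
    static void Main()
    {
        string contentPath = @"C:\Content.txt";     // the big content file
        string templateFolder = @"C:\Templates";    // placeholder folder for the templates

        foreach (string templatePath in Directory.EnumerateFiles(templateFolder))
        {
            // Templates are small (max ~300 lines), so loading them fully
            // into memory makes going back to previous lines trivial.
            string[] templateLines = File.ReadAllLines(templatePath);

            // Scan the large content file line by line (forward only).
            using (StreamReader reader = new StreamReader(contentPath))
            {
                string contentLine;
                while ((contentLine = reader.ReadLine()) != null)
                {
                    for (int i = 0; i < templateLines.Length; i++)
                    {
                        // Placeholder for the real replacement conditions.
                        if (ShouldReplace(templateLines[i], contentLine))
                        {
                            templateLines[i] = contentLine;
                        }
                    }
                }
            }

            // Write the updated template out to a new file.
            File.WriteAllLines(templatePath + ".out", templateLines);
        }
    }

    // Placeholder predicate standing in for the actual conditions.
    static bool ShouldReplace(string templateLine, string contentLine)
    {
        return false;
    }
}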

Or is there any better approach for this?

Thanks

Upvotes: 0

Views: 888

Answers (1)

Matt Thomas

Reputation: 463

In my opinion, you're almost in a catch-22 situation. Either you load the large file into memory (via your collection), which, depending on its size and the memory available on the server, might be the best approach; or you iterate through the template files and, for each iteration, open a new file stream over the large file (slower because of the extra file I/O, but with low memory consumption) so that you can perform your "search", since as we all know the file stream is forward-only.
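To make the first option concrete, something along these lines (a rough sketch; the paths are placeholders and the replacement rules are left out):

using System.IO;

class Program
{
    static void Main()
    {
        // Option 1: read the ~20k-line content file into memory once
        // and reuse the array for every template.
        string[] contentLines = File.ReadAllLines(@"C:\Content.txt");

        foreach (string templatePath in Directory.EnumerateFiles(@"C:\Templates"))
        {
            string[] templateLines = File.ReadAllLines(templatePath);

            // contentLines is now random-access, so it can be searched
            // as many times as needed without re-reading the file.
            // (Option 2 would instead open a new StreamReader over
            // C:\Content.txt inside this loop: slower, but less memory.)

            // ... apply the replacement rules here ...

            File.WriteAllLines(templatePath + ".out", templateLines);
        }
    }
}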

Upvotes: 2
