Reputation: 2436
In my current directory I have a couple of .txt files. I want to write a script that searches for a string in those .txt files and deletes the lines containing it. For example, I'd like to delete all lines that contain the word "start" in every .txt file in my current directory.
I have written the following code, but I don't know how to continue!
#!/bin/bash
files=$(find . -maxdepth 1 -name '*.txt')
How should I use "while" to go through each file?
Upvotes: 3
Views: 7092
Reputation: 15
with open(file_listoflinks, 'r+', encoding='utf-8') as f_link:
    lines = f_link.readlines()  # read and store all lines in a list
    f_link.seek(0)              # move the file pointer back to the beginning of the file
    f_link.truncate()           # truncate the file
    # write everything back except the first line
    # lines[1:] is line 2 through the last line
    f_link.writelines(lines[1:])
Upvotes: 0
Reputation: 84353
When you use -maxdepth 1 on the current directory, you aren't recursing into subdirectories. In that case, there's no need to use find at all just to match files with an extension; you can use shell globs instead to populate your loop constructs. For example:
#!/bin/bash
# Run sed on each file to delete the matching lines.
for file in *.txt; do
    sed -i '/text to match/d' "$file"
done
This is simple, and avoids a number of filename-related issues that you may have when passing filename arguments between processes. Keep it simple!
Upvotes: 4
Reputation: 6132
Easy peasy:
sed -i "s/^.*string.*//" *.txt
this will remove any line containing 'string' on each .txt file
Upvotes: 3
Reputation: 798686
You use it along with read to get each filename in turn, after piping the results of find to it. Then you just pass each filename to sed to delete the lines you're interested in.
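A minimal sketch of that pipeline, assuming GNU sed's -i option and using the word "start" from the question as the pattern:
#!/bin/bash
# Pipe find's results into a while/read loop and edit each file in place.
find . -maxdepth 1 -name '*.txt' -print0 |
while IFS= read -r -d '' file; do
    # Delete every line containing "start" in this file.
    sed -i '/start/d' "$file"
done
The -print0 / read -d '' pairing is only a precaution so filenames with spaces or newlines survive the pipe; a plain find ... | while read -r file loop works for ordinary names.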
Upvotes: 1