Reputation: 134255
I want to delete one or more specific line numbers from a file. How would I do this using sed?
Upvotes: 364
Views: 395976
Reputation: 51
To delete several lines in all JSON files:
sed -i '1,2d;53,54d;105,106d;157,158d' *.json
Upvotes: 0
Reputation: 333246
If you want to delete lines 5 through 10 and line 12:
sed -e '5,10d;12d' file
This will print the results to the screen. If you want to save the results to the same file:
sed -i.bak -e '5,10d;12d' file
This will store the unmodified file as file.bak, and delete the given lines.
Note: Line numbers start at 1. The first line of the file is 1, not 0.
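A quick sanity check of the one-based addressing, using a throwaway three-line input:
printf 'a\nb\nc\n' | sed '1d'
# prints b and c: address 1 matched the first line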
Upvotes: 554
Reputation:
cat -b /etc/passwd | sed -E 's/^( )+(<line_number>)(\t)(.*)/--removed---/g;s/^( )+([0-9]+)(\t)//g'
cat -b
-> prints the lines with line numbers in front of them
s/^( )+(<line_number>)(\t)(.*)/--removed---/g
-> matches the line with the given number and replaces its content with the marker --removed---
s/^( )+([0-9]+)(\t)//g
-> strips the numbers that cat printed from the remaining lines
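A hypothetical concrete run of the same pipeline, with 3 standing in for the <line_number> placeholder:
cat -b /etc/passwd | sed -E 's/^( )+(3)(\t)(.*)/--removed---/g;s/^( )+([0-9]+)(\t)//g'
# line 3 of the numbered output becomes --removed---, and the numbers cat added are stripped from the rest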
Upvotes: 0
Reputation: 3227
To delete the first line of a file in place with sed:
sed -i '1d' file
As Brian states here, the form <address><command> is used: the <address> is 1 and the <command> is d.
Upvotes: 4
Reputation: 5559
You can delete a particular single line by its line number:
sed -i '33d' file
This deletes line 33 and saves the updated file.
Upvotes: 127
Reputation: 189880
This is very often a symptom of an antipattern. The tool which produced the line numbers may well be replaced with one which deletes the lines right away. For example,
grep -nh error logfile | cut -d: -f1 | deletelines logfile
(where deletelines is the utility you are imagining you need) is the same as
grep -v error logfile
Having said that, if you are in a situation where you genuinely need to perform this task, you can generate a simple sed script from the file of line numbers. Humorously (but perhaps slightly confusingly) you can do this with sed.
sed 's%$%d%' linenumbers
This accepts a file of line numbers, one per line, and produces, on standard output, the same line numbers with d appended after each. This is a valid sed script, which we can save to a file, or (on some platforms) pipe to another sed instance:
sed 's%$%d%' linenumbers | sed -f - logfile
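For concreteness, a small sketch of the round trip, reusing the placeholder names linenumbers and logfile from above:
printf '3\n7\n' > linenumbers                  # line numbers to delete, one per line
sed 's%$%d%' linenumbers                       # emits the script: 3d and 7d
sed 's%$%d%' linenumbers | sed -f - logfile    # prints logfile without lines 3 and 7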
On some platforms, sed -f does not understand the option argument - to mean standard input, so you have to redirect the script to a temporary file and clean it up when you are done, or perhaps replace the lone dash with /dev/stdin or /proc/self/fd/0 if your OS (or shell) has that.
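A sketch of that temporary-file variant, assuming a POSIX shell with mktemp available:
tmp=$(mktemp) || exit 1
trap 'rm -f "$tmp"' EXIT              # clean up the generated script when done
sed 's%$%d%' linenumbers > "$tmp"     # write the d-commands to a file
sed -f "$tmp" logfile                 # apply them to logfile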
As always, you can add -i before the -f option to have sed edit the target file in place, instead of producing the result on standard output. On *BSDish platforms (including OSX) you need to supply an explicit argument to -i as well; a common idiom is to supply an empty argument: -i ''.
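For example, with the generated commands saved to a (hypothetical) file script.sed:
sed -i -f script.sed logfile       # GNU sed: edit logfile in place
sed -i '' -f script.sed logfile    # *BSD/macOS sed: -i needs an explicit (here empty) suffix argument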
Upvotes: 8
Reputation: 2838
I would like to propose a generalization with awk.
When the file is made of blocks of a fixed size, and the lines to delete repeat in the same positions in every block, awk works well:
awk '{nl=((NR-1)%2000)+1; if ( (nl<714) || ((nl>1025)&&(nl<1029)) ) print $0}' OriginFile.dat > MyOutputCuttedFile.dat
In this example the block size is 2000 and I want to print the lines [1..713] and [1026..1028] of each block.
NR is the variable used by awk to store the current line number.
% gives the remainder (or modulus) of the division of two integers.
nl=((NR-1)%BLOCKSIZE)+1 writes into the variable nl the line number inside the current block (see below).
|| and && are the logical operators OR and AND.
print $0 writes out the full line.
Why ((NR-1)%BLOCKSIZE)+1:
(NR-1): we need the shift of one because 1%3=1, 2%3=2, but 3%3=0.
+1: we add 1 back to restore the desired numbering.
+-----+------+----------+------------+
| NR | NR%3 | (NR-1)%3 | (NR-1)%3+1 |
+-----+------+----------+------------+
| 1 | 1 | 0 | 1 |
| 2 | 2 | 1 | 2 |
| 3 | 0 | 2 | 3 |
| 4 | 1 | 0 | 1 |
+-----+------+----------+------------+
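A quick way to sanity-check the block logic on synthetic data; the file names are the ones from the example, and the expected count assumes exactly two 2000-line blocks:
seq 4000 > OriginFile.dat      # two blocks of 2000 numbered lines
awk '{nl=((NR-1)%2000)+1; if ( (nl<714) || ((nl>1025)&&(nl<1029)) ) print $0}' OriginFile.dat > MyOutputCuttedFile.dat
wc -l MyOutputCuttedFile.dat   # expect 2*(713+3) = 1432 lines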
Upvotes: 3