Reputation: 2423
I'm trying to use VIM to remove a duplicate line in an XML file I created. (I can't recreate the file because the ID numbers will change.)
The file looks something like this:
<tag k="natural" v="water"/>
<tag k="nhd:fcode" v="39004"/>
<tag k="natural" v="water"/>
I'm trying to remove one of the duplicate k="natural" v="water" lines. When I try to use the \_
prefix to make my pattern match across newlines, Vim doesn't seem to find anything.
Any tips on what regex or tool to use?
Upvotes: 5
Views: 2783
Reputation: 8819
It seems like the bash, Python, and Perl methods would work, but you are already in Vim, so why not define a function like:
function! RemoveDuplicateLines()
    let lines = {}
    let result = []
    " Collect each distinct line, preserving first-occurrence order.
    for lineno in range(line('$'))
        let line = getline(lineno + 1)
        if !has_key(lines, line)
            let lines[line] = 1
            let result += [line]
        endif
    endfor
    " Replace the buffer contents with the de-duplicated lines.
    %d
    call append(0, result)
    " Remove the trailing empty line left over from %d.
    $d
endfunction
Upvotes: 0
Reputation: 3643
First of all, you can use awk
to remove all duplicate lines, keeping their order.
:%!awk '\!_[$0]++'
If you are not sure whether there are other duplicate lines that you don't want to remove, just add conditions:
:%!awk '\!(_[$0]++ && /tag/ && /natural/ && /water/)'
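For readers unfamiliar with the awk idiom, !_[$0]++ keeps a line only the first time it is seen: the stored count is zero (falsy) on the first encounter and is incremented afterwards. A minimal Python sketch of the same logic (the function name is mine, not from the answer):

```python
from collections import defaultdict

def keep_first_occurrences(lines):
    """Mimic awk '!seen[$0]++': emit each line only the first time it appears."""
    seen = defaultdict(int)      # line -> how many times we've seen it so far
    out = []
    for line in lines:
        if not seen[line]:       # count is 0 only on the first encounter
            out.append(line)
        seen[line] += 1          # increment after the test, like the ++ in awk
    return out

# The duplicate 'natural/water' line is dropped; survivor order is preserved.
print(keep_first_occurrences([
    '<tag k="natural" v="water"/>',
    '<tag k="nhd:fcode" v="39004"/>',
    '<tag k="natural" v="water"/>',
]))
```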
That said, parsing a nested structure like XML with regexes is a bad idea, IMHO:
you will constantly have to take care not to break the document.
xmllint
gives you a list of specific elements in the file:
:!echo "cat //tag[@k='natural' and @v='water']" | xmllint --shell %
You can then delete the duplicate lines one by one.
Upvotes: 7
Reputation: 172638
A simple regular expression won't suffice. I've implemented a
:DeleteDuplicateLinesIgnoring
command (as well as related commands) in my PatternsOnText plugin. You can even supply a {pattern}
to exclude certain lines from the de-duplication.
Upvotes: 1
Reputation: 9256
Instead of doing this inside Vim, you could run something like
sort filename | uniq -c | grep -v "^[ \t]*1[ \t]"
to figure out which lines are duplicated, and then just use a normal search in Vim to jump to each one and delete it.
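The same "report only lines whose count is above one" step can be sketched in Python with collections.Counter (the helper name is mine, not from the answer):

```python
from collections import Counter

def duplicated_lines(lines):
    """Return lines occurring more than once, like sort | uniq -c | grep -v ' 1 '."""
    counts = Counter(lines)
    return [line for line, n in counts.items() if n > 1]

# Only the repeated tag line is reported.
print(duplicated_lines([
    '<tag k="natural" v="water"/>',
    '<tag k="nhd:fcode" v="39004"/>',
    '<tag k="natural" v="water"/>',
]))
```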
Upvotes: 1
Reputation: 12413
With Python, to remove all repeated lines:
#!/usr/bin/env python
import sys

def remove_identical(filein, fileout):
    lines = list()
    for line in open(filein, 'r').readlines():
        if line not in lines:
            lines.append(line)
    fout = open(fileout, 'w')
    fout.write(''.join(lines))
    fout.close()

remove_identical(sys.argv[1], sys.argv[2])
Upvotes: 1
Reputation: 342619
To the OP: if you have bash 4.0 (needed for associative arrays):
#!/bin/bash
# use an associative array to track lines already seen
declare -A DUP
file="myfile.txt"
: > temp                       # start with an empty temp file
while IFS= read -r line
do
    if [ -z "${DUP[$line]}" ]; then
        DUP[$line]=1
        printf '%s\n' "$line" >> temp   # append, so earlier lines are kept
    fi
done < "$file"
mv temp "$file"
Upvotes: 1
Reputation: 24412
You could select the lines and then do a :'<,'>sort u
if you don't care about the ordering; it will sort them and remove duplicates.
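To make the ordering caveat concrete, here is a small Python illustration (variable names are mine) of the difference between what :sort u produces and an order-preserving dedup:

```python
lines = ['<b/>', '<a/>', '<b/>']

# What :sort u produces: the unique lines, in sorted order.
sorted_unique = sorted(set(lines))

# An order-preserving dedup keeps the original first-occurrence order instead.
order_preserving = list(dict.fromkeys(lines))

print(sorted_unique)      # sorted order
print(order_preserving)   # original order of first occurrences
```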
Upvotes: 4
Reputation: 754440
Answers using 'uniq' suffer from the problem that 'uniq' only finds adjacent duplicated lines, so the data must first be sorted, which loses the original line order.
If no line may ever be repeated, then it is relatively simple to do in Perl (or any other scripting language with regex and associative-array support), assuming that the data source is not incredibly humongous:
#!/bin/perl -w
# BEWARE: untested code!
use strict;
my(%lines);
while (<>)
{
    print if !defined $lines{$_};
    $lines{$_} = 1;
}
However, if it is used indiscriminately, this is likely to break the XML since end tags are legitimately repeated. How to avoid this? Maybe by a whitelist of 'OK to repeat' lines? Or maybe only lines with open tags with values are subject to duplicate elimination:
#!/bin/perl -w
# BEWARE: untested code!
use strict;
my(%lines);
while (<>)
{
    if (m%^\s*<[^\s>]+\s[^\s>]+%)
    {
        print if !defined $lines{$_};
        $lines{$_} = 1;
    }
    else
    {
        print;
    }
}
Of course, there is also the (largely valid) argument that processing XML with regular expressions is misguided. This coding assumes the XML comes with lots of convenient line breaks; real XML may not contain any, or only a very few.
Upvotes: 1
Reputation: 19661
Are you trying to search and replace the line with nothing? You could try the g
command instead:
:%g/search_expression_here/d
The d
at the end tells it to delete the lines that match.
You may find more tips here.
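One caveat with :g//d: it deletes every line that matches, including the copy you wanted to keep. If the goal is to keep the first match and drop later ones, the logic looks like this Python sketch (the helper name is mine):

```python
def delete_duplicate_matches(lines, pattern):
    """Keep the first line containing `pattern`; drop later lines that match it."""
    seen_match = False
    out = []
    for line in lines:
        if pattern in line:
            if seen_match:
                continue          # a later matching line: drop it
            seen_match = True     # first match: keep it
        out.append(line)
    return out

print(delete_duplicate_matches(
    ['<tag k="natural" v="water"/>',
     '<tag k="nhd:fcode" v="39004"/>',
     '<tag k="natural" v="water"/>'],
    'k="natural" v="water"',
))
```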
Upvotes: 0