mirri66

Reputation: 614

remove single elements in a text file in bash

Basically what I have is a text file (file.txt), which contains lines of numbers (lines aren't necessarily the same length) e.g.


1 2 3 4
5 6 7 8
9 10 11 12 13

What I need to do is write new files with each of these numbers deleted, one at a time, with replacement; e.g. the first new file will contain


2 3 4 <--- 1st element removed
5 6 7 8
9 10 11 12 13

and the 7th file will contain


1 2 3 4
5 6 8 <--- 7th element removed here
9 10 11 12 13

To generate these, I'm looping over each line, and then over each element in that line. E.g. for the 7th file, where I remove the third element of the second line, I'm trying to do this by reading in the line, removing the appropriate element, and then reinserting the new line:


$lineNo is 2 (second line)
$line is 5 6 7 8
with cut, I remove the third number, making $newline 5 6 8
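
In shell terms, that extraction step is roughly the following (a simplified sketch; I'm relying on GNU cut's --complement option):

lineNo=2
line=$(sed -n "${lineNo}p" file.txt)                  # -> "5 6 7 8"
newline=$(echo "$line" | cut -d' ' --complement -f3)  # -> "5 6 8"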

Then I try to replace line $lineNo in file.txt with $newline using sed:
sed -n '$lineNo s/.*/'$newline'/' > file.txt

This is totally not working; I get an error:
sed: can't read 25.780000: No such file or directory

(where 25.780000 is a number in my text file; it looks like sed is trying to use the contents of $newline as file names or something)
I also have reason to suspect that my way of specifying which line to replace isn't working either :(

My question is: a) is there a better way to do this than sed, and b) if sed is the way to go, what am I doing wrong?

Thanks!!

Upvotes: 3

Views: 397

Answers (4)

Birei

Reputation: 36272

One way using perl:

Content of file.txt:

1 2 3 4
5 6 7 8
9 10 11 12 13

Content of script.pl:

use warnings;
use strict;

## Read all input to a scalar variable as a single string.
my $str;
{
        local $/ = undef;
        $str = <>;
}

## Loop for each number found.
while ( $str =~ m/(\d+)(?:\h*)?/g ) {

        ## Open file for writing. The name of the file will be
        ## the number matched in previous regexp.
        open my $fh, q[>], ($1 . q[.txt]) or
                die qq[Couldn't create file $1.txt\n];

        ## Print everything prior to matched string plus everything
        ## after matched string.
        printf $fh qq[%s%s], $`, $';

        ## Close file.
        close $fh;
}

Run it like:

perl script.pl file.txt

Show files created:

ls [0-9]*.txt

With output:

10.txt  11.txt  12.txt  13.txt  1.txt  2.txt  3.txt  4.txt  5.txt  6.txt  7.txt  8.txt  9.txt

Show content of one of them:

cat 9.txt

Output:

1 2 3 4
5 6 7 8
10 11 12 13
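
Note that the output files are named after the matched numbers, so if the same number appeared more than once in the input, later matches would overwrite earlier files. A variant that numbers the output files sequentially instead could look like this (an untested sketch of the same loop):

my $i = 1;
while ( $str =~ m/(\d+)\h*/g ) {

        ## Open the next sequentially numbered output file.
        open my $fh, q[>], ($i . q[.txt]) or
                die qq[Couldn't create file $i.txt\n];

        ## Print everything prior to the matched string plus everything after it.
        printf $fh qq[%s%s], $`, $';

        close $fh;
        ++$i;
}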

Upvotes: 0

Idelic

Reputation: 15582

Unless I misunderstood the question, the following should work, although it will be pretty slow if your files are large:

#! /bin/bash

# remove every occurrence of a given value from the file named in $2
remove_by_value()
{
  local TO_REMOVE=$1

  while read -r line; do
    out=
    for word in $line; do [ "$word" = "$TO_REMOVE" ] || out="$out $word"; done
    echo "${out/ }"
  done < "$2"
}

# remove the NTH word of the file named in $2, counting words across all lines
remove_by_position()
{
  local NTH=$1

  while read -r line; do
    out=
    for word in $line; do
      ((--NTH == 0)) || out="$out $word"
    done
    echo "${out/ }"
  done < "$2"
}

FILE=$1
shift
for number; do
  echo "Removing $number"
  remove_by_position "$number" "$FILE"
done

This will dump all the output to stdout, but it should be trivial to change it so that the output for each removed number is redirected to its own file (e.g. with remove_by_position $number $FILE > $FILE.$$ && mv $FILE.$$ $FILE.$number and proper quoting; see the sketch below). Run it as, say,

$ bash script.sh file.txt $(seq 11)
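
For example, the final loop rewritten that way might look like this (a sketch; the $FILE.$$ temp name and the $FILE.$number output names are just one possible naming scheme):

FILE=$1
shift
for number; do
  # write each result to its own file instead of stdout
  remove_by_position "$number" "$FILE" > "$FILE.$$" && mv "$FILE.$$" "$FILE.$number"
done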

Upvotes: 3

user unknown

Reputation: 36229

I have to admit that I'm a bit surprised at how short the other solutions are.

#!/bin/bash
#
file=$1
lines=$(wc -l < "$file")
out=0

dropFromLine () {
    # write one outfile per element of row $2, with that element removed
    file=$1
    row=$2
    to=$((row-1))
    from=$((row+1))
    # split the target row into an array of its numbers
    linecontent=($(sed -n "${row}p" $file))
    linelen=${#linecontent[@]}
    for n in $(seq 0 $((linelen-1)))
    do
        (
        # lines before the target row, unchanged
        if (( row > 1 )) ; then sed -n "1,${to}p" $file ; fi
        # the target row, with element $n dropped
        for i in $(seq 0 $((linelen-1)))
        do
            if [[ $n != $i ]]
            then
                echo -n ${linecontent[$i]}" "
            fi
        done
        echo
        # lines after the target row, unchanged
        sed -n "$from,${lines}p" $file
        ) > outfile-${out}
        out=$((out+1))
    done
}

for row in $(seq 1 $lines)
do
    dropFromLine $file $row
done

invocation:

./dropFromRow.sh num.dat

num.dat:

1 2 3 4
5 6 7 8
9 10 11

result:

outfile-0  outfile-10  outfile-3  outfile-5  outfile-7  outfile-9
outfile-1  outfile-2   outfile-4  outfile-6  outfile-8

samples:

asux:~/proj/mini/forum > cat outfile-0
2 3 4  
5 6 7 8
9 10 11
asux:~/proj/mini/forum > cat outfile-1
1 3 4  
5 6 7 8
9 10 11

Upvotes: 1

glenn jackman

Reputation: 246847

filename=file.txt
i=1
while [[ -s $filename ]]; do
    new=file_$i.txt
    # drop the first word of the first line; once only one word is left, drop the line itself
    awk 'NR==1 {if (NF==1) next; else sub(/^[^ ]+ /, "")} 1' "$filename" > "$new"
    ((i++))
    filename=$new
done

Each iteration drops the first word of the first line, and once a line is down to a single word the whole line is dropped. The loop ends when the last generated file is empty.


Update due to requirement clarification:

words=$(wc -w < file.txt)
for ((i=1; i<=words; i++)); do
    awk -v n=$i '
        # blank out the n-th word of the file, counting words across lines
        words < n && n <= words+NF {$(n-words) = ""}
        # keep a running word count and print every line
        {words += NF; print}
    ' file.txt > file_$i.txt
done
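
With the sample file.txt from the question, the 7th generated file should then look like this (blanking a field leaves a doubled space where it used to be):

1 2 3 4
5 6  8
9 10 11 12 13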

Upvotes: 3
