Gravel

Reputation: 465

Shell script to split a text file into multiple files

I have a text file that looks like

file1 value1
file1 value2
file2 value1
file2 value2
file2 value3

I want to split the text file into multiple files, using column 1 as the filenames and column 2 as the content of the files, like

file1_new.txt
value1
value2

file2_new.txt
value1
value2
value3

I tried the following script, but it does not work. Can someone please help me rewrite it?

cat input.txt | while read file value; do 
    echo "$value" > "$file"_new.txt  
done

Upvotes: 0

Views: 923

Answers (4)

James Brown

Reputation: 37404

You could use this awk:

$ awk '{print $2 > $1 "_new.txt"}' file

Since this leaves all the output files open, you may run out of file descriptors. In that case, close() each file after writing and change > to >> so the writes append:

$ awk '{print $2 >> $1 "_new.txt"; close($1 "_new.txt")}' file

Naturally, if the input is sorted on $1, you only need to close() each output file when $1 changes.
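
For example, a minimal sketch of that sorted-input variant (the variable names out and prev are just illustrative, not part of the original one-liners):

$ awk '$1 != prev { if (out != "") close(out); out = $1 "_new.txt"; prev = $1 } { print $2 > out }' file

Here > truncates each output file once when it is first opened and keeps appending while it stays open, so stale results from an earlier run are overwritten.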

Upvotes: 1

imabug

Reputation: 378

echo "$value" > "$file"_new.txt will create a new $file_new.txt each time, writing $value into it.

You want to use the >> operator instead, to append each $value to "$file"_new.txt.

See the bash manual on redirections
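
A minimal sketch of the corrected loop (reading input.txt directly instead of piping from cat, and using read -r so backslashes are not mangled):

while read -r file value; do
    echo "$value" >> "${file}_new.txt"
done < input.txt

Note that any *_new.txt files left over from a previous run should be removed first, since >> keeps appending to them.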

Upvotes: 0

iBug

Reputation: 37227

Don't overwrite the file. Append to the file instead:

cat input.txt | while read file value; do 
    echo "$value" >> "$file"_new.txt  
done

Note my change from > to >>, which means "append to the file".

When you use >, an existing target file is truncated (its content cleared) before the command writes its output, so each file ends up containing only the last value. When you use >>, the existing content is kept and new output is appended to the end of the file, which is what you want.
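
A quick illustration (demo.txt is just a throwaway file for the example):

$ echo a > demo.txt; echo b > demo.txt; cat demo.txt
b
$ rm demo.txt; echo a >> demo.txt; echo b >> demo.txt; cat demo.txt
a
b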

Upvotes: 3

samthegolden

Reputation: 1490

Use >> instead of > to append.

Upvotes: 3
