Reputation:
I have two files. File A
has a list of words, one on each line. File B
contains another huge list of words, but some are quite long. How would I use sed or awk to take each line from file A
and combine it with each line in file B
that isn't longer than 6 characters? It would ideally spit out all the results in a new file.
For example:
File A:
cool
beans
sad
File B:
armadillo
snake
bread
New File:
coolsnake
coolbread
beanssnake
beansbread
sadsnake
sadbread
Upvotes: 4
Views: 2423
Reputation: 58371
This might work for you:
sed 's|.*|sed "/......./d;s/.*/&\&/" fileB|' fileA | sh
With GNU sed:
sed 's|.*|sed "/......./d;s/.*/&\&/" fileB|e' fileA
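To see what's going on: the outer sed turns each fileA word W into the inner command `sed "/......./d;s/.*/W&/" fileB`, which deletes fileB lines of 7 or more characters and prefixes W to the survivors. A runnable sketch, using the sample files from the question (the file names fileA/fileB are assumed):

```shell
# Recreate the question's sample files.
printf 'cool\nbeans\nsad\n' > fileA
printf 'armadillo\nsnake\nbread\n' > fileB
# For the fileA line "cool", the outer sed emits:
#   sed "/......./d;s/.*/cool&/" fileB
# and piping all the generated commands to sh runs them in order.
sed 's|.*|sed "/......./d;s/.*/&\&/" fileB|' fileA | sh
```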
Upvotes: 3
Reputation: 246754
with bash:
mapfile -t shortwords < <(sed -r '/.{7,}/d' B.txt)
while read -r word; do
for suffix in "${shortwords[@]}"; do
echo "$word$suffix"
done
done < A.txt
Upvotes: 1
Reputation: 36262
One way using perl:
Content of script.pl:
use warnings;
use strict;
die qq[Usage: perl $0 <fileA> <fileB>\n] unless @ARGV == 2;
open my $fh, q[<], pop or die $!;
my @words = grep { length( $_ ) <= 6 } map { chomp; $_ } <$fh>;
while ( <> ) {
chomp;
for my $word ( @words ) {
printf qq[%s\n], $_ . $word;
}
}
Run it like:
perl script.pl fileA fileB
With the following output:
coolsnake
coolbread
beanssnake
beansbread
sadsnake
sadbread
Upvotes: 1
Reputation: 314
#!/bin/bash
while read -r line1; do
    while read -r line2; do
        [[ ${#line2} -le 6 ]] && \
            echo "$line1$line2"
    done < ./B.txt
done < ./A.txt
Something like that; adapt it to your needs. For the sample files it gives me:
coolsnake
coolbread
beanssnake
beansbread
sadsnake
sadbread
Upvotes: 2
Reputation: 36262
Not the same order as your output, but it could be useful:
awk '
FNR == NR {
words[ $1 ] = 1;
next
}
FNR < NR {
if ( length( $1 ) <= 6 )
for ( word in words ) {
print word $0
}
}
' fileA fileB
Output:
coolsnake
sadsnake
beanssnake
coolbread
sadbread
beansbread
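If the question's order matters, a variant of the above (my sketch, not the original answer) swaps the file order and stores the short fileB words in a numerically indexed array, so iteration is deterministic and the output is grouped by fileA word:

```shell
# Recreate the question's sample files (names assumed).
printf 'cool\nbeans\nsad\n' > fileA
printf 'armadillo\nsnake\nbread\n' > fileB
# Read fileB first, keeping only words of 6 chars or fewer, in order;
# then append each of them to every fileA line.
awk '
    FNR == NR { if (length($1) <= 6) short[++n] = $1; next }
    { for (i = 1; i <= n; i++) print $0 short[i] }
' fileB fileA
```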
Upvotes: 4