Reputation: 783
I have a text file and I want to run two types of operations on it, where each operation must read every line of the text independently of the other. The only way I know how to do it is:
open my $out, '>>', 'out.txt' or die $!;
while (<>) {
    # operation one
}
while (<>) {
    # operation two
}
close $out;
but only the first while loop actually processes the file. The first operation runs fine, but the second is never completed, because the second while (<>)
does not re-read the file; it tries to continue from where the first loop left off, which is at the end of the file. So is there another way? Or is there a way to tell the second while to start again at the beginning?
Upvotes: 3
Views: 449
Reputation: 2058
If you were reading from an actual file, you could rewind it with
seek FILEHANDLE, 0, 0;
However, you are reading from STDIN, and I don't think it's possible to rewind STDIN and start over.
Upvotes: -1
Reputation: 3601
If the data fits into memory:
my @lines = <>;
for ( @lines ) {
    # operation one
}
for ( @lines ) {
    # operation two
}
Upvotes: 2
Reputation: 118128
You can localize @ARGV before the first pass.
#!/usr/bin/env perl
use strict;
use warnings;
{
    local @ARGV = @ARGV;
    while (<>) {
        print "Pass 1: $_";
    }
}
while (<>) {
    print "Pass 2: $_";
}
Upvotes: 1
Reputation: 385789
Couldn't you simply use the following?
while (<>) {
    operation1($_);
    operation2($_);
}
If not, then I'm assuming you need to process the content of all the files using one operation before it's processed by the other.
<>
reads from the files listed in @ARGV
, removing them as it opens them, so the simplest solution is to back up @ARGV
and repopulate it.
my @argv = @ARGV;
while (<>) { operation1($_); }
@ARGV = @argv;
while (<>) { operation2($_); }
Of course, this will fail if <>
reads from something other than a plain file or a symlink to a plain file. (The same goes for any solution using seek
.) The only way to make that work would be to load the entire file into temporary storage (e.g. memory or a temporary file). The following is the simplest example of that:
my @lines = <>;
for (@lines) { operation1($_); }
for (@lines) { operation2($_); }
Upvotes: 2
Reputation: 53478
Given you mention in a comment:
perl example.pl text.txt
the answer is: don't use <>
and instead open a filehandle yourself.
my ( $filename ) = @ARGV;
open ( my $input, "<", $filename ) or die $!;
while ( <$input> ) {
    print;
}
seek ( $input, 0, 0 );
while ( <$input> ) {
    # something else
}
Alternatively, you can - assuming text.txt
isn't particularly large - just read the whole thing into an array.
my @input_lines = <$input>;
foreach ( @input_lines ) {
    # something
}
If you want to specify multiple files on the command line, you can wrap the whole thing in a foreach
loop:
foreach my $filename ( @ARGV ) {
    ## open; while; seek; while etc.
}
Upvotes: 4
Reputation: 30
If you need to run the operations line by line, why not try something like this:
sub operation_1 {
    my $line = shift;
    # processing for operation 1
}

sub operation_2 {
    my $line = shift;
    # processing for operation 2
}

while (<>) {
    my $line = $_;
    chomp($line);
    operation_1($line);
    operation_2($line);
}
Upvotes: 0
Reputation: 13792
If no file handle is used with the diamond operator, Perl examines the @ARGV
special variable. If @ARGV
has no elements, the diamond operator reads from STDIN
.
This is another way of achieving your requirements:
my @stdin = <>;
foreach my $item ( @stdin ) {
    # ...
}
foreach my $item ( @stdin ) {
    # ...
}
Upvotes: 0