mlin

Reputation: 11

How can I open a program through a pipe and store its output in a variable in Perl?

I have two columns of data. How can I get the minimum and maximum values of each column?

I'm trying something like:

open my $in, "|gmt minmax -C" or die "Cannot open pipe to gmt: $!";
print $in ...;
...
close($in);

However, the output is printed to the screen. How can I store the results in @minmax (with backticks, perhaps)?

Upvotes: 0

Views: 290

Answers (1)

Tanktalus

Reputation: 22254

There are many ways to pull this off.

  1. Use a module.

There are many modules that can handle the piping for you in a way that is much simpler than doing it yourself: IPC::Run, IPC::Run3, or, my current favourite, AnyEvent::Util::run_cmd. This is just the tiniest sampling - look around on CPAN and I'm sure you'll find dozens more. Some of these can direct output into scalars; some also let you handle the data as it comes in, which can be advantageous when you receive a lot of output, possibly over a long period of time, and want to filter it on the way through to extract only the portions you're interested in. (A minimal IPC::Run3 sketch appears after this list.)

  2. Use pipe yourself.

This is what some of the modules do. Create a pair of pipes (one for each direction), fork, close the pipe ends you don't need in each process, hook the right ends up to the child's STDIN and STDOUT, and exec the child process. It's tricky to get it all right, which is why the modules exist. (A bare-bones sketch is included after this list.)

  3. Find the min and max yourself.

You have the data in your process space already (or you have the file); just read it into a pair of arrays and call List::Util::min and List::Util::max on them. (I'm assuming they're simple numbers - if they're something more complex, you may have to implement your own min/max.) See the short List::Util example after this list.

  4. Treat data as data.

If your data is structured, one of my favourites is to simply feed it into DBD::CSV so that I can run SQL against it - you can even feed the data directly from memory if that's what you have. It's a bit slower at runtime, but it can do some pretty useful things. It's also a good segue into putting the data into a real (or real-ish) database, whether that's SQLite as an intermediate step (never my ultimate plan, but a great dev tool or proof of concept that may turn out to be viable for years) or MySQL, Postgres, DB2, Oracle, whatever, especially if your data set continues to grow. (A small DBD::CSV sketch closes out this answer.)
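
For option 1, here is a minimal sketch using IPC::Run3. The @rows sample data and the tab-separated input format are assumptions for illustration; the only requirement is that gmt minmax -C is on your PATH.

use strict;
use warnings;
use IPC::Run3;                                     # run3() feeds stdin and captures stdout for us

my @rows  = ( [1, 10], [2, 20], [3, 5] );          # hypothetical sample data
my $stdin = join '', map { "$_->[0]\t$_->[1]\n" } @rows;

run3 [qw(gmt minmax -C)], \$stdin, \my $stdout, \my $stderr;
die "gmt minmax failed: $stderr" if $?;

my @minmax = split ' ', $stdout;                   # one min/max pair per column
print "@minmax\n";

run3 blocks until the command exits, which is fine here; for streaming or event-driven handling you would reach for IPC::Run or AnyEvent::Util::run_cmd instead.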
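
For option 2, a bare-bones sketch of doing the plumbing yourself (the command and sample data are again assumptions). Note that writing all of the input before reading any output can deadlock once the data no longer fits in the pipe buffers - one of the problems the modules above solve for you.

use strict;
use warnings;

my @rows = ( [1, 10], [2, 20], [3, 5] );                     # hypothetical sample data

pipe(my $child_in_r,  my $child_in_w)  or die "pipe: $!";    # parent writes -> child STDIN
pipe(my $child_out_r, my $child_out_w) or die "pipe: $!";    # child STDOUT -> parent reads

defined(my $pid = fork()) or die "fork: $!";
if ($pid == 0) {                                   # child: wire up the pipes, then exec
    close $child_in_w;
    close $child_out_r;
    open STDIN,  '<&', $child_in_r  or die "dup STDIN: $!";
    open STDOUT, '>&', $child_out_w or die "dup STDOUT: $!";
    exec qw(gmt minmax -C) or die "exec gmt: $!";
}

close $child_in_r;                                 # parent: close the ends the child owns
close $child_out_w;
print {$child_in_w} "$_->[0]\t$_->[1]\n" for @rows;
close $child_in_w;                                 # EOF tells gmt we're done sending data

my @minmax = split ' ', do { local $/; <$child_out_r> };
close $child_out_r;
waitpid $pid, 0;
print "@minmax\n";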
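
For option 3, a short example with List::Util, reading the two columns into a pair of arrays first. The __DATA__ section stands in for your real input file.

use strict;
use warnings;
use List::Util qw(min max);

my (@col1, @col2);
while (<DATA>) {                                   # swap in your real filehandle here
    my ($x, $y) = split;
    push @col1, $x;
    push @col2, $y;
}

my @minmax = ( min(@col1), max(@col1), min(@col2), max(@col2) );
print "@minmax\n";

__DATA__
1 10
2 20
3 5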
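
For option 4, a small DBD::CSV sketch. The file name data.csv, the column names x and y, and the tab separator are assumptions; the file is expected to start with a header row, which DBD::CSV uses for the column names, and the SQL::Statement engine behind DBD::CSV provides the MIN and MAX aggregates used here.

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:CSV:', undef, undef, {
    f_dir        => '.',                           # directory holding data.csv
    f_ext        => '.csv',                        # table "data" -> file "data.csv"
    csv_sep_char => "\t",                          # our columns are tab-separated
    RaiseError   => 1,
});

# data.csv starts with a header line naming the columns, e.g. "x<TAB>y"
my @minmax = @{ $dbh->selectrow_arrayref(
    'SELECT MIN(x), MAX(x), MIN(y), MAX(y) FROM data'
) };
print "@minmax\n";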

Upvotes: 2
