Reputation: 1416
I'm looking for a way to limit the amount of output produced by all command-line programs in Linux, and preferably to be told when the output has been limited.

I'm working over a connection to a server that has display lag. Occasionally I accidentally run a command that prints a large amount of text to the terminal, such as cat on a large file or ls on a directory with many files, and then I have to wait a while for all the output to be drawn.

So is there a way to automatically pipe all output into a command like head or wc to prevent too much output from having to be printed to the terminal?
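To illustrate the behaviour I'm after: a sketch of a helper (trunc is a made-up name, not an existing command) that prints at most 50 lines and then reports how much it suppressed:

trunc () {
    # print the first 50 lines; in END, report anything that was held back
    awk 'NR <= 50; END { if (NR > 50) print "... [" NR - 50 " more lines suppressed]" }'
}
ls /usr/share/doc | trunc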
Upvotes: 5
Views: 5171
Reputation: 5617
This makes me think of bash's handler for unknown commands.

Bash lets you define a command_not_found_handle function that is invoked whenever a program cannot be found. What about writing your own handler and clearing $PATH, so that every command goes through it and is executed with its output redirected into a filtering pipe?

(I did not try it myself.)
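A minimal sketch of that idea (untested, as noted above; REAL_PATH is an illustrative name and /nonexistent a placeholder), requiring bash 4.0 or later for command_not_found_handle:

REAL_PATH=$PATH      # remember the real search path
PATH=/nonexistent    # now every external command is "not found"

command_not_found_handle () {
    local PATH=$REAL_PATH    # restore lookup inside the handler only
    "$@" | head -n 100       # run the requested command, truncated
}

Shell builtins and functions are resolved before the PATH search, so they bypass the handler.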
Upvotes: 1
Reputation: 77135
Making aliases (or functions) of your usual commands would be a good start. Something like
alias lm='ls -al | more'
cam () { cat "$@" | more; }   # aliases can't take arguments, so cam is a function
Upvotes: 1
Reputation: 12214
Assuming you're working over a network connection (such as ssh) to a remote server, try piping the output of the command to less. That way you can manage and navigate the output on the server side: use j and k to move down and up one line, and ctrl-u and ctrl-d to move half a page up and down. When you do this, only the relevant text (i.e. what fits on the screen) is transmitted over the network.
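For example, from an ssh session (the path is arbitrary):

ls -lR /usr/share | less    # only one screenful is drawn at a time

less pulls more input only as you scroll, so the lag stays bounded no matter how long the listing is.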
Upvotes: 0
Reputation: 16804
I don't know about the general case, but for each well-known command (cat, ls, find?) you could do something along these lines (utterly untested):

$ ln "$(which cat)" ~/bin/old_cat

function trunc_cat () {
    old_cat "$@" | head -n 100
}
alias cat=trunc_cat
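A hypothetical generalization of the same idea (equally untested): wrap several commands at once using bash's command builtin, which bypasses functions and aliases, so no old_cat hard link is needed:

for cmd in cat ls find; do
    # define a function shadowing each command; `command` skips the
    # function itself during lookup, so there is no recursion
    eval "${cmd} () { command ${cmd} \"\$@\" | head -n 100; }"
done

One caveat: ls notices its output is a pipe and falls back to single-column output, so the wrapped version formats differently from the bare command.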
Upvotes: 3