Reputation: 7564
Is the output of a Bash command stored in any register? E.g. something similar to $?
capturing the output instead of the exit status.
I could assign the output to a variable with:
output=$(command)
but that's more typing...
Upvotes: 280
Views: 309618
Reputation: 1
Check out asciinema and its GitHub repo. It's a terminal session recorder that saves the session to an ASCII, JSON-based file. Normally you have to tell it to start recording, but you could have it start automatically every time you open a new session by invoking it from .bashrc
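A minimal sketch of such a .bashrc hook, assuming the asciinema CLI is installed. The ASCIINEMA_REC guard relies on asciinema exporting that variable inside the recorded shell (true in recent versions); treat the file-naming scheme as a placeholder:

```shell
# Hypothetical .bashrc fragment: auto-record every interactive session.
# asciinema sets ASCIINEMA_REC inside the recorded shell, so this guard
# prevents the recorded shell from starting yet another recording.
if [ -z "$ASCIINEMA_REC" ] && [ -t 1 ]; then
  exec asciinema rec "$HOME/session-$(date +%s).cast"
fi
```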
Upvotes: 0
Reputation: 5092
Add this to .bashrc
export CAPTUREOUTPUT=-1
sys.capture.output.on(){
exec > >(tee -a /tmp/output)
CAPTUREOUTPUT=0
debug "capture is on"
}
sys.capture.output.off(){
exec > /dev/tty
CAPTUREOUTPUT=-1
debug "capture is off"
}
sys.capture.output.toggle(){
if (( $CAPTUREOUTPUT == 0 ))
then
debug "turning off output capture"
sys.capture.output.off
else
debug "turning on output capture"
sys.capture.output.on
fi
}
sys.capture.output.last(){
n=${1:-1}
tail -n"$n" /tmp/output
}
sys.capture.output.last.copy(){
desktop.clipboard.set "$(sys.capture.output.last)"
}
debug(){
echo "$@" >&2
}
sys.capture.output.last.paste(){
READLINE_LINE="$READLINE_LINE$(sys.capture.output.last)";
READLINE_POINT=${#READLINE_LINE}
}
bind -x '"\C-x\C-v": sys.capture.output.last.paste'
bind -x '"\C-x\C-x": sys.capture.output.last.copy'
bind -x '"\C-x\:": sys.capture.output.toggle'
Then use it like this:
# start capturing output with C-x followed by :
# you should see the following debug lines
turning on output capture
capture is on
$
# then run any command
$ shuf -n1 DATA/irc.urls
https://i.imgur.com/eZU9Eyu.gif
# you can paste the last line of output with C-x C-v
$ waterfox <C-x C-v> # this should paste https://i.imgur.com/eZU9Eyu.gif
if you'd like to stop capturing output, simply turn it off by typing C-x : again, and you should see the following debug output
turning off output capture
capture is off
$
If you'd like to put the last line of output to the clipboard, just hit C-x C-x. Now you can paste it somewhere else.
Here's the script you need to put in your PATH in order to put things in your clipboard
$ cat desktop.clipboard.set
#!/bin/bash
if (( $# > 0 ))
then
xclip -selection clipboard <<< "$@"
else
xclip -selection clipboard
fi
$
Upvotes: 0
Reputation: 428
bind '"\C-h": "\C-e`\"\'\C-a history -s \"`2>&1 \C-j"'
After that I can press Ctrl-h instead of ENTER, and both standard output and standard error are copied to the history. Useful when, for example, typing a COMMAND
results in
Command 'COMMAND' not found, but can be installed with:
sudo apt install PACKAGE
P.S. C-a
is a Readline key binding that moves to the beginning of the line, and C-e
moves to the end (from man bash
). I could not find C-j
there, but it seems to act like ENTER.
Solution based on https://superuser.com/a/135856/1264656.
Edit: added C-e
to allow editing the command with the cursor not only at the end; added quoting to prevent pathname expansion by history -s
(of e.g. ll
as its output often includes *
). Per the comments on the linked answer, C-j
is the control character listed as ^J in https://en.wikipedia.org/wiki/Control_character
.
Upvotes: 1
Reputation: 11728
Although the question is specifically about Bash, it may help users to know that in zsh the result of a command can be inserted for execution as follows:
A trivial example (press tab not enter after this line): $(echo "echo hello from inside")
Upvotes: 0
Reputation:
None of the other current answers are actual solutions, because they miss the point. Here's how to actually read the output of commands.
First of all, you have to realize that your shell never sees any of the output. All answers focusing on the shell can be dismissed.
When executed, a command has three files open that it can use: stdin, stdout, and stderr. Those are also available as /proc/self/fd/[0-2]
or /dev/pts/[0-2]
.
Those are pseudo-terminal slaves. Pseudo-terminals are essentially just named pipes, with a master and a slave end. The master end is just a file descriptor that a process gets by opening /dev/pts/ptmx (which stands for pseudo-terminal master muxer). … there used to be separate files for these somewhere in /dev/, and on true Unixes there still are (because everything is a file!), but GNU/Linux once again forgot the lessons of its ancestors, and lost the power that came with them.
And so we end up with the actual data your processes put out residing in a buffer of your virtual terminal program, be it a standard Linux virtual console, or an X program like xterm, Konsole, or GNOME Terminal, which then renders it via some graphical API.
Meaning, if you want to know those contents, you have to ask those programs!
The standard interface for this is ANSI terminal commands. (You know, what lets you make text colorful, move the cursor, query the mouse pointer position, or even draw pixels on the most modern terminals.)
Unfortunately, I do not know of any terminal that supports accessing its buffer.
Plus there’s the difficulty of them all having different interfaces.
Fortunately, we can inject our own intermediate layer virtual terminal of our choice that supports accessing its buffer.
Which is a good idea to use as your login shell anyway, not only if you’re connecting via SSH and the connection might break mid-session. But also if you want to move your session somewhere else, e.g. to restart X.
The natural choice for this is of course something like screen
. Which lets you log its entire session to a “file”. Which does not need to be a physical file, thanks to Unix principles. We can use a named pipe. And gather everything from there.
Here’s what the “outer” part of the solution, which goes at the end of your .bashrc
, could look like:
if shopt -q login_shell && [[ $- == *i* ]]; then # Login + interactive?
export ANS_FIFO=/tmp/$$.history ## $$ = current shell PID.
mkfifo -m 600 $ANS_FIFO
screen -OU -L -Logfile $ANS_FIFO # With optimal mode + UTF-8
rm $ANS_FIFO # Clean up
fi
Now all the “inner” part has to do, is read $ANS_FIFO
into $ANS
after each command execution, to make it available as $ANS
for following commands:
export PROMPT_COMMAND+='[[ -v ANS_FIFO ]] && export ANS="$(< $ANS_FIFO)"'
It tells bash to run a command on each prompt, where it checks if $ANS_FIFO
is set, and reads the file it is set to into $ANS
. You can put this in your .bashrc
too.
This technically works, and I have tested it (including escape sequences and UTF-8), BUT:
Problem: While this theoretically works, in practice your shell will quickly hang, because a named pipe is not meant to be a buffer and is treated as synchronous (even though technically, on Linux, it does have a buffer, and you can even change its size programmatically). So reading $ANS_FIFO
when it is empty will block, waiting for screen
to write something (a line of text, presumably) to it, before it returns.
What is needed is a small background listener that continuously drains the FIFO and sets $ANS
then. Combined with $ANS
always being emptied first after a command, but before the listening server sets it and before the shell returns to the prompt, this would mean that $ANS
is always set to only what screen
logged since the last prompt, and reading it won't lock you up. In other words: an $ANS
variable, just like electronic calculators have been doing it for decades.
Unfortunately I have to go do other things now, since I sat on this for half the day. If anyone is willing to implement this before me, feel free to edit my answer. If you can’t, just add your own answer. I may or may not do it myself, next time I need this. :)
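A minimal sketch of that missing piece, as my own attempt rather than the answer's method (`drain_fifo` is a hypothetical name): opening the FIFO read-write means the open never blocks waiting for a writer, and a short `read` timeout lets us stop once the already-logged data is consumed.

```shell
# Sketch: non-blocking drain of $ANS_FIFO into $ANS.
# Opening the FIFO read-write (<>) means open() never blocks waiting
# for a writer, and read -t returns once no more data is buffered.
drain_fifo() {
  local line out=
  exec 9<>"$ANS_FIFO"                 # read-write open never blocks
  while IFS= read -r -t 0.1 -u 9 line; do
    out+=$line$'\n'
  done
  exec 9<&-                           # close our end again
  ANS=$out
}
```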
Upvotes: 7
Reputation: 4844
One that I've used for years. (Add this to your .bashrc
or .bash_profile
.)
# capture the output of a command so it can be retrieved with ret
cap () { tee /tmp/capture.out; }
# return the output of the most recent command that was captured by cap
ret () { cat /tmp/capture.out; }
$ find . -name 'filename' | cap
/path/to/filename
$ ret
/path/to/filename
I tend to add | cap
to the end of all of my commands. This way when I find I want to do text processing on the output of a slow running command I can always retrieve it with ret
.
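For example, once a slow command's output has been captured, any number of text-processing passes can run against ret without rerunning it:

```shell
# The two helpers from above:
cap () { tee /tmp/capture.out; }
ret () { cat /tmp/capture.out; }

# Capture once (stand-in for a slow command)...
printf 'alpha\nbeta\ngamma\n' | cap >/dev/null

# ...then process the saved output as many times as you like.
ret | grep -c a      # → 3 (lines containing "a")
ret | head -n 1      # → alpha
```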
Upvotes: 94
Reputation: 11
Demo for non-interactive commands only: http://asciinema.org/a/395092
For also supporting interactive commands, you'd have to hack the script
binary from util-linux to ignore any screen-redrawing console codes, and run it from bashrc to save your login session's output to a file.
Upvotes: 0
Reputation: 336
Yeah, why type extra lines each time; agreed. You can pipe one command's output into another's input, but redirecting printed output back to input (1>&0) is a no-go, at least for multi-line output. Also, you won't want to write the same function again and again in each file. So let's try something else.
A simple workaround would be to use printf's -v option to store the value in a variable.
printf -v myoutput "`cmd`"
such as
printf -v var "`echo ok;
echo fine;
echo thankyou`"
echo "$var" # don't forget the backquotes and quotes in either command.
Another customizable, general solution (the one I use myself) runs the desired command only once and collects its multi-line printed output, line by line, in an array variable.
If you are not exporting the files anywhere and intend to use this locally only, you can have your terminal set up the function declaration: add the function to your ~/.bashrc
or ~/.profile
file. In the second case, you need to enable Run command as login shell
from Edit > Preferences > yourProfile > Command
.
Make a simple function, say:
get_prev() # preferably pass the commands in quotes. Single commands might still work without.
{
# option 1: create an executable with the command(s) and run it
#echo $* > /tmp/exe
#bash /tmp/exe > /tmp/out
# option 2: if your command is a single command (no pipes, no semicolons); even then it may not run correctly in some cases.
#echo `"$*"` > /tmp/out
# option 3: (I actually used below)
eval "$*" > /tmp/out # or simply "$*" > /tmp/out
# return the command(s) outputs line by line
IFS=$(echo -en "\n\b")
arr=()
exec 3</tmp/out
while read -u 3 -r line
do
arr+=($line)
echo $line
done
exec 3<&-
}
So what we did in option 1 was print the whole command to a temporary file /tmp/exe
, run it, save the output to another file /tmp/out
, and then read the contents of /tmp/out
line by line into an array.
Options 2 and 3 are similar, except that the commands are executed directly, without first being written to an executable.
In main script:
#run your command:
cmd="echo hey ya; echo hey hi; printf `expr 10 + 10`'\n' ; printf $((10 + 20))'\n'"
get_prev $cmd
#or simply
get_prev "echo hey ya; echo hey hi; printf `expr 10 + 10`'\n' ; printf $((10 + 20))'\n'"
Now, since bash does not scope function variables by default, the arr
variable created in the get_prev
function is accessible outside the function in the main script:
#get previous command outputs in arr
for((i=0; i<${#arr[@]}; i++))
do
echo ${arr[i]}
done
#if you're sure that your output won't have escape sequences you bother about, you may simply print the array
printf "${arr[*]}\n"
Here's a more general, option-driven version of the function:
get_prev()
{
usage()
{
echo "Usage: alphabet [ -h | --help ]
[ -s | --sep SEP ]
[ -v | --var VAR ] \"command\""
}
ARGS=$(getopt -a -n alphabet -o hs:v: --long help,sep:,var: -- "$@")
if [ $? -ne 0 ]; then usage; return 2; fi
eval set -- $ARGS
local var="arr"
IFS=$(echo -en '\n\b')
for arg in $*
do
case $arg in
-h|--help)
usage
echo " -h, --help : opens this help"
echo " -s, --sep : specify the separator, newline by default"
echo " -v, --var : variable name to put result into, arr by default"
echo " command : command to execute. Enclose in quotes if multiple lines or pipelines are used."
shift
return 0
;;
-s|--sep)
shift
IFS=$(echo -en $1)
shift
;;
-v|--var)
shift
var=$1
shift
;;
-|--)
shift
;;
*)
cmd=$arg
;;
esac
done
if [ ${#} -eq 0 ]; then usage; return 1; fi
ERROR=$( { eval "$*" > /tmp/out; } 2>&1 )
if [ $ERROR ]; then echo $ERROR; return 1; fi
local a=()
exec 3</tmp/out
while read -u 3 -r line
do
a+=($line)
done
exec 3<&-
eval $var=\(\${a[@]}\)
print_arr $var # comment this to suppress output
}
print()
{
eval echo \${$1[@]}
}
print_arr()
{
eval printf "%s\\\n" "\${$1[@]}"
}
I've been using this to print space-separated outputs of multiple/pipelined/both commands as line-separated:
get_prev -s " " -v myarr "cmd1 | cmd2; cmd3 | cmd4"
For example:
get_prev -s ' ' -v myarr whereis python # or "whereis python"
# can also be achieved (in this case) by
whereis python | tr ' ' '\n'
Now, the tr
command is useful in other places as well, such as
echo $PATH | tr ':' '\n'
But for multiple/piped commands... you know now. :)
-Himanshu
Upvotes: 6
Reputation: 582
You can use -exec to run a command on the output of another command, i.e. to reuse the output, as in the example with a find
command below:
find . -name anything.out -exec rm {} \;
Here you are saying: find a file called anything.out in the current folder and, if found, remove it. If it is not found, everything after -exec is skipped.
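A couple of variations on the same idea, demonstrated in a scratch directory (GNU find assumed for the commented alternatives):

```shell
# Create a throwaway tree for the demo.
demo=$(mktemp -d)
touch "$demo/anything.out" "$demo/keep.txt"

# -exec ... {} \; runs rm once per matching file:
find "$demo" -name 'anything.out' -exec rm {} \;

# Equivalent, but batching arguments ("+") or using the built-in
# -delete action is usually faster when there are many matches:
# find "$demo" -name '*.out' -exec rm {} +
# find "$demo" -name '*.out' -delete

ls "$demo"    # → keep.txt
```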
Upvotes: -1
Reputation: 133
If you don't want to recompute the previous command, you can create a macro that scans the current terminal buffer, tries to guess the supposed output of the last command, copies it to the clipboard, and finally types it into the terminal.
It can be used for simple commands that return a single line of output (tested on Ubuntu 18.04 with gnome-terminal
).
Install the following tools: xdotool
, xclip
, ruby
In gnome-terminal
go to Preferences -> Shortcuts -> Select all
and set it to Ctrl+shift+a
.
Create the following ruby script:
cat >${HOME}/parse.rb <<EOF
#!/usr/bin/ruby
stdin = STDIN.read
d = stdin.split(/\n/)
e = d.reverse
f = e.drop_while { |item| item == "" }
g = f.drop_while { |item| item.start_with? "${USER}@" }
h = g[0]
print h
EOF
In the keyboard settings add the following keyboard shortcut:
bash -c '/bin/sleep 0.3 ; xdotool key ctrl+shift+a ; xdotool key ctrl+shift+c ; ( (xclip -out | ${HOME}/parse.rb ) > /tmp/clipboard ) ; (cat /tmp/clipboard | xclip -sel clip ) ; xdotool key ctrl+shift+v '
The above shortcut waits briefly, selects the whole terminal buffer (Ctrl+Shift+A), copies it (Ctrl+Shift+C), extracts the last output line with the Ruby script, puts that line on the clipboard, and pastes it back into the terminal (Ctrl+Shift+V).
Upvotes: 1
Reputation: 10017
You can use $(!!)
to recompute (not re-use) the output of the last command.
The !!
on its own executes the last command.
$ echo pierre
pierre
$ echo my name is $(!!)
echo my name is $(echo pierre)
my name is pierre
Upvotes: 320
Reputation: 49
I think using the script command might help. Something like:
script -c bash -qf fifo_pid
Then use Bash features to parse the captured output.
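A non-interactive sketch of the idea with util-linux script, writing to a plain file rather than a FIFO; note the typescript file contains CRLF line endings (and, depending on version and options, header lines), so it needs cleanup before reuse:

```shell
# Record a single command's terminal output into a typescript file.
script -q -c 'echo captured-line' /tmp/typescript >/dev/null

# The raw file has \r\n line endings and possibly header/footer lines,
# so strip carriage returns and pick out what you need.
tr -d '\r' < /tmp/typescript | grep 'captured-line'
```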
Upvotes: 0
Reputation: 135
I have an idea that I don't have time to try to implement immediately.
But what if you do something like the following:
$ MY_HISTORY_FILE=$(get_temp_filename)
$ MY_HISTORY_FILE=$MY_HISTORY_FILE bash -i 2>&1 | tee $MY_HISTORY_FILE
$ some_command
$ cat $MY_HISTORY_FILE
$ # ^You'll want to filter that down in practice!
There might be issues with IO buffering. Also the file might get too huge. One would have to come up with a solution to these problems.
Upvotes: 0
Reputation: 2914
Inspired by anubhava's answer, which I think is not actually acceptable as it runs each command twice.
save_output() {
exec 1>&3
{ [ -f /tmp/current ] && mv /tmp/current /tmp/last; }
exec > >(tee /tmp/current)
}
exec 3>&1
trap save_output DEBUG
This way the output of last command is in /tmp/last and the command is not called twice.
Upvotes: 13
Reputation: 391
Not sure exactly what you need this for, so this answer may not be relevant. You can always save the output of a command: netstat >> output.txt
, but I don't think that's what you're looking for.
There are of course programming options; you could simply have a program read the text file above after the command is run and assign its contents to a variable. In Ruby, my language of choice, you can create a variable from command output using backticks:
output = `ls` # create a variable from the command's output
if output.include? "Downloads" #if statement to see if command includes 'Downloads' folder
print "there appears to be a folder named downloads in this directory."
else
print "there is no directory called downloads in this file."
end
Stick this in a .rb file and run it: ruby file.rb
and it will create a variable out of the command and allow you to manipulate it.
Upvotes: 1
Reputation: 10852
Like konsolebox said, you'd have to hack into Bash itself. Here is a quite good example of how one might achieve this: the stderred repository (actually meant for coloring stderr) gives instructions on how to build it.
I gave it a try: Defining some new file descriptor inside .bashrc
like
exec 41>/tmp/my_console_log
(number is arbitrary) and modify stderred.c
accordingly so that content also gets written to fd 41. It kind of worked, but the result contains loads of NUL bytes and weird formatting; it's basically binary data, not readable. Maybe someone with a good understanding of C could try that out.
If so, everything needed to get the last printed line is tail -n 1 [logfile]
.
Upvotes: 3
Reputation: 1818
If you are on mac, and don't mind storing your output in the clipboard instead of writing to a variable, you can use pbcopy and pbpaste as a workaround.
For example, instead of doing this to find a file and diff its contents with another file:
$ find app -name 'one.php'
/var/bar/app/one.php
$ diff /var/bar/app/one.php /var/bar/two.php
You could do this:
$ find app -name 'one.php' | pbcopy
$ diff $(pbpaste) /var/bar/two.php
The string /var/bar/app/one.php
is in the clipboard when you run the first command.
By the way, pb in pbcopy
and pbpaste
stand for pasteboard, a synonym for clipboard.
Upvotes: 16
Reputation: 785098
One way of doing that is by using trap DEBUG
:
f() { bash -c "$BASH_COMMAND" >& /tmp/out.log; }
trap 'f' DEBUG
Now the most recently executed command's stdout and stderr will be available in /tmp/out.log.
Only downside is that it will execute a command twice: once to redirect output and error to /tmp/out.log
and once normally. Probably there is some way to prevent this behavior as well.
Upvotes: 11
Reputation: 75478
The answer is no. Bash does not store the output of commands in any parameter or any block of its memory. Besides, you may only access Bash through its allowed interface operations; Bash's private data is not accessible unless you hack it.
Upvotes: 131
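So the portable options remain the explicit ones shown in other answers: capture at the moment you run the command, optionally teeing to a file if you also want to process it later. A minimal sketch:

```shell
# Plain capture into a variable (what the question wanted to avoid typing):
out=$(printf 'line1\nline2\n')

# Capture to a file AND keep it around for later processing, tee-style:
printf 'line1\nline2\n' | tee /tmp/last.out >/dev/null
saved=$(cat /tmp/last.out)

# Both command substitutions strip the trailing newline, so they agree:
[ "$out" = "$saved" ] && echo same    # → same
```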