Jahid

Reputation: 22428

should I avoid bash -c, sh -c, and other shells' equivalents in my shell scripts?

Consider the following code:

#!/bin/bash -x
VAR='1 2 3'
bash -c "echo "\$$VAR""
eval "echo "\$$VAR""
bash -c "echo \"\$$VAR\""
eval "echo \"\$$VAR\""

Which outputs:

+ VAR='1 2 3'
+ bash -c 'echo $1' 2 3
3
+ eval 'echo $1' 2 3
++ echo 2 3
2 3
+ bash -c 'echo "$1 2 3"'
 2 3
+ eval 'echo "$1 2 3"'
++ echo ' 2 3'
 2 3

It seems both eval and bash -c interpret the code the same way, i.e. "echo "\$$VAR"" becomes 'echo $1' 2 3 and "echo \"\$$VAR\"" becomes 'echo "$1 2 3"'.

The only difference I seem to notice is that bash -c runs the code in a subshell, which makes its results differ from eval's. For example, in

bash -c 'echo $1' 2 3

2 becomes $0 and 3 becomes the positional parameter $1 of the subshell. On the other hand, in

eval 'echo $1' 2 3

they are just further arguments passed to echo.

So my question is: is the -c option (bash -c, sh -c, or other shells' equivalents) safe to use, or is it as evil as eval?

Upvotes: 3

Views: 945

Answers (2)

Charles Duffy

Reputation: 295278

Yes, you should avoid using sh -c and equivalents in your shell scripts, just as you avoid eval.

eval "$foo"

...is, when one boils it down, a security risk because it treats data as code, restarting the parsing process from the very beginning (thus running expansions, redirections, etc.). Moreover, because quoting contexts are re-evaluated in this process, content inside the data being evaluated can escape quotes, terminate commands, and otherwise evade any efforts made at security.

sh -c "$foo"

does the same -- running expansions, redirections, and the like -- but in a completely new shell (one that does not share non-exported variables or other state).

Both of these mean that content such as $(rm -rf /) is prone to being expanded unless great care is taken, rather than ensuring -- as is generally the case -- that data is only ever treated as data, which is a foundational element of writing secure code.
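To make the data-vs-code distinction concrete, here is a minimal sketch (the variable name untrusted and the INJECTED payload are illustrative, standing in for attacker-controlled input):

```shell
# Illustrative "untrusted" input containing a command substitution.
untrusted='$(echo INJECTED)'

# Treating the data as data: the $(...) is never expanded.
printf '%s\n' "$untrusted"        # prints: $(echo INJECTED)

# Treating the data as code: eval restarts parsing, so the
# command substitution actually runs.
eval "printf '%s\n' $untrusted"   # prints: INJECTED
```

With a destructive payload in place of echo, the eval line would execute it; the plain printf line never would.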


Now, the happy thing here is that when you're using bash (or zsh, or ksh93) rather than sh, you can avoid using eval in almost all cases.

For instance:

value=$(eval "echo \$$varname")

...can be replaced with:

value=${!varname}

...and in bash,

eval "$varname="'$value'

...can be replaced with...

printf -v "$varname" %s "$value"

...and so forth (with ksh93 and zsh having direct equivalents to the latter); more advanced formulations involving associative maps and the like are best addressed with the bash 4.3 (and ksh93) nameref support.
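Putting the two eval-free idioms above into one runnable sketch (the names varname and greeting are illustrative; bash-specific syntax):

```shell
varname=greeting
greeting='hello world'

# Indirect read: replaces value=$(eval "echo \$$varname")
value=${!varname}
echo "$value"                      # prints: hello world

# Indirect write: replaces eval "$varname="'$value'
printf -v "$varname" %s 'goodbye'
echo "$greeting"                   # prints: goodbye
```

Note the quotes around "$varname" in the printf -v form: without them, printf would assign to a variable literally named varname rather than to the one it points at.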


Notably, bash -c doesn't replace eval effectively in most of the above examples, because it doesn't run in the same context: Changes made to shell state are thrown away when the child process exits; thus, not only does bash -c not buy safety, but it doesn't work as an eval replacement to start with.
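A quick sketch of that state-isolation point (variable names are illustrative):

```shell
x=original
bash -c 'x=changed'   # runs in a child process; the assignment dies with it
echo "$x"             # prints: original

eval 'x=changed'      # runs in the current shell
echo "$x"             # prints: changed
```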

Upvotes: 4

rghome

Reputation: 8819

As far as I am aware, there is no reason to use bash -c from a bash shell. Using it will start a new process, which is expensive.

You can use eval, which doesn't start a new process. If you want a subshell (for example, to keep changes from affecting the parent environment) you can use parentheses.
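A minimal sketch of the parentheses idiom (the variable x is illustrative): the group runs in a subshell, so its changes are discarded, without exec'ing a whole new bash binary the way bash -c does.

```shell
x=outer
( x=inner; echo "$x" )   # prints: inner
echo "$x"                # prints: outer -- the subshell's change is gone
```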

Normally, bash -c (or another shell's -c) is used to execute a command from a non-shell environment (or perhaps a DSL interpreted by a shell) where shell expansion of the arguments is required. In the old days, you might use it with execvp in a C program, for example. These days, there are usually ways of running a command using a shell in most environments.
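When bash -c genuinely is needed, one common safe pattern (a sketch; the file variable is illustrative) is to pass data as positional parameters rather than splicing it into the command string, echoing the injection concern from the first answer:

```shell
file='name with spaces; $(echo injected)'

# The _ fills $0; the data lands safely in $1, quoted inside the
# command string, so spaces and $(...) in it are never re-parsed.
bash -c 'printf "got: %s\n" "$1"' _ "$file"
# prints: got: name with spaces; $(echo injected)
```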

Upvotes: 0
