Bash: Checking for exit status of multi-pipe command chain


I have a problem checking whether a certain command in a multi-pipe command chain did throw an error. Usually this is not hard to check but neither set -o pipefail nor checking ${PIPESTATUS[@]} works in my case. The setup is like this:

cmd="$snmpcmd $snmpargs $agent $oid | grep <grepoptions> for_stuff | cut -d',' -f$fields | sed 's/substitute/some_other_stuff/g'"

Note-1: The command was tested thoroughly and works perfectly.

Now, I want to store the output of that command in an array called procdata. Thus, I did:

declare -a procdata
procdata=( $(eval $cmd) )

Note-2: eval is necessary because otherwise $snmpcmd fails with an invalid option -- <grepoption> error, which makes no sense because <grepoption> is obviously not an $snmpcmd option. At this stage I consider this a bug in $snmpcmd, but that's another story...
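(The need for eval here is standard bash behaviour rather than an $snmpcmd bug: when an unquoted variable is expanded, the `|` characters inside it become literal arguments after word splitting, not pipeline operators, so everything after the first `|` is handed to $snmpcmd as options. A toy illustration, using echo/tr in place of the real snmp command:)

```shell
cmd='echo hello | tr a-z A-Z'

# Without eval, '|', 'tr', 'a-z', 'A-Z' are passed to echo as plain
# arguments after word splitting -- no pipeline is created:
without_eval=$($cmd)        # -> "hello | tr a-z A-Z"

# eval re-parses the expanded string, so the pipe is honored:
with_eval=$(eval "$cmd")    # -> "HELLO"

echo "$without_eval"
echo "$with_eval"
```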

If an error occurs, procdata will be empty. However, it might be empty for two different reasons: either an error occurred while executing $snmpcmd (e.g. a timeout), or grep simply couldn't find what it was looking for. The problem is, I need to be able to distinguish between these two cases and handle them separately.

Thus, set -o pipefail is not an option, since it propagates any error and I can't tell which part of the pipe failed. On the other hand, echo ${PIPESTATUS[@]} always prints a single 0 after procdata=( $(eval $cmd) ), even though the command contains several pipes. Yet if I execute the whole command directly at the prompt and call echo ${PIPESTATUS[@]} immediately afterwards, it returns the exit status of every part of the pipeline correctly.
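(For what it's worth, this behaviour is reproducible with a toy pipeline, `false` standing in for a failing $snmpcmd: PIPESTATUS is rewritten after every command, and a pipeline that runs inside a command substitution takes its statuses down with the subshell:)

```shell
# Run a failing pipeline directly: PIPESTATUS reflects both stages.
false | true
direct=( "${PIPESTATUS[@]}" )     # -> 1 0

# The same pipeline inside a command substitution: the parent shell's
# PIPESTATUS describes only the assignment itself -- a single 0.
out=$(false | true)
indirect=( "${PIPESTATUS[@]}" )   # -> 0

echo "direct: ${direct[*]}  indirect: ${indirect[*]}"
```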

I know I could redirect the error stream to stdout, but then I'd have to use heuristics to decide whether the elements in procdata are valid data or error messages, and I'd risk false positives. I could also pipe stdout to /dev/null, capture only the error stream and check whether ${#procdata[@]} -eq 0, but then I'd have to repeat the call to get the actual data, and the whole command is costly (ca. 3-5 s); I wouldn't want to run it twice. Or I could write errors to a temporary file, but I'd rather avoid the overhead of creating and deleting files.

Any ideas how I can make this work in bash?

Thanks

P.S.:

$ echo $BASH_VERSION
4.2.37(1)-release

1 Answer

Answer by devnull (accepted, score 10):

A number of things here:

(1) When you say eval $cmd inside a command substitution and then look at "${PIPESTATUS[@]}" in the parent shell, you only get the exit status of the assignment itself: the pipeline in $cmd ran in the substitution's subshell, and its PIPESTATUS was lost when that subshell exited.

(2) You need to capture PIPESTATUS while still inside the command substitution that collects the pipeline's output. Attempting to read it afterwards won't work.


As an example, you can say:

foo=$(command | grep something | command2; echo "${PIPESTATUS[@]}")

This captures both the pipeline's output and the PIPESTATUS values (as the last line) in the variable foo.

You could get the command output into an array by saying:

result=($(head -n -1 <<< "$foo"))

and the PIPESTATUS values by saying:

tail -n 1 <<< "$foo"
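Putting it together for the question's setup, here is a sketch (a printf/grep/cut toy chain stands in for the real $snmpcmd pipeline; the data and stage names are made up). Capture output plus statuses in one call, split them, then branch on the status of the stage you care about. Note that head -n -1 ("all but the last line") is a GNU coreutils extension:

```shell
# Toy stand-in for: $snmpcmd ... | grep ... | cut ...
foo=$(printf 'a,1\nb,2\n' | grep 'a' | cut -d',' -f1; echo "${PIPESTATUS[@]}")

statuses=( $(tail -n 1 <<< "$foo") )    # one exit status per pipeline stage
procdata=( $(head -n -1 <<< "$foo") )   # everything except the status line

if (( statuses[0] != 0 )); then
    echo "snmp stage failed"            # e.g. timeout
elif (( statuses[1] != 0 )); then
    echo "grep found no match"          # empty result, but snmp was fine
else
    echo "ok: ${procdata[*]}"
fi
```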