I've seen similar questions about getting stderr from a Lambda in AWS, but instead I'd like the handler to include stderr in the response payload.
This basic bootstrap script, similar to the one AWS provides for custom runtimes, works...
#!/bin/bash
set -euo pipefail
# Initialization - load the handler script (the part of _HANDLER before the dot)
source "$LAMBDA_TASK_ROOT/$(echo "$_HANDLER" | cut -d. -f1).sh"
# Processing loop
while true
do
  HEADERS="$(mktemp)"
  # Get an event. The HTTP request will block until one is received
  EVENT_DATA=$(curl -sS -LD "$HEADERS" "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
  # Extract the request ID by scraping the response headers received above
  REQUEST_ID=$(grep -Fi Lambda-Runtime-Aws-Request-Id "$HEADERS" | tr -d '[:space:]' | cut -d: -f2)
  set +eo pipefail
  # Check that jq is available
  if ! command -v ./jq &> /dev/null; then
    echo '{"stderr": "Error: jq could not be found."}'
    exit 1
  fi
  # Run the handler function (the part of _HANDLER after the dot) - works, but without stderr
  RESPONSE=$($(echo "$_HANDLER" | cut -d. -f2) "$EVENT_DATA")
  set -eo pipefail
  # Send the response
  curl "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/$REQUEST_ID/response" -d "$RESPONSE"
done
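For context, a handler script that this bootstrap would load might look like the following. This is a minimal sketch: the filename function.sh and the function name handler are assumptions matching a _HANDLER value of "function.handler".

```shell
#!/bin/bash
# function.sh - loaded by the bootstrap via "source".
# With _HANDLER=function.handler, the bootstrap sources function.sh
# and then calls the handler function with the event JSON as $1.
handler () {
  EVENT_DATA=$1
  # Anything written to stdout becomes the invocation response...
  echo '{"received": true}'
  # ...while anything written to stderr only shows up in the logs.
  echo "handler was invoked" >&2
}
```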
...but if I instead try to capture stdout and stderr to files as below, it doesn't work at all (both variables end up empty):
# Run the handler function from the script
$(echo "$_HANDLER" | cut -d. -f2) "$EVENT_DATA" > ./stdout.txt 2> ./stderr.txt
# Read the contents of the files into separate variables
STDOUT=$(cat ./stdout.txt)
STDERR=$(cat ./stderr.txt)
RESPONSE=$(./jq -n --arg stdout "$STDOUT" --arg stderr "$STDERR" '{stdout: $stdout, stderr: $stderr}')
I'm guessing it's because AWS is doing some other redirection elsewhere (for CloudWatch, perhaps) that blocks this.
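One thing that may be worth ruling out before blaming a hidden redirect: in the Lambda execution environment the filesystem is read-only except for /tmp, so the relative paths ./stdout.txt and ./stderr.txt (which resolve under $LAMBDA_TASK_ROOT) can fail silently while set +e is in effect. A minimal sketch of the same capture writing under /tmp instead (my_handler here is just a stand-in for the real handler function):

```shell
#!/bin/bash
# Stand-in for the real handler: writes to both streams.
my_handler () {
  echo "normal output"          # goes to stdout
  echo "diagnostic output" >&2  # goes to stderr
}

# Create scratch files under /tmp, the only writable path in Lambda.
OUT_FILE=$(mktemp /tmp/stdout.XXXXXX)
ERR_FILE=$(mktemp /tmp/stderr.XXXXXX)

# Redirect each stream to its own file, then read them back.
my_handler "some-event" > "$OUT_FILE" 2> "$ERR_FILE"
STDOUT=$(cat "$OUT_FILE")
STDERR=$(cat "$ERR_FILE")
rm -f "$OUT_FILE" "$ERR_FILE"
```

The variables could then be fed to jq exactly as in the snippet above to build the {stdout, stderr} response object.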