I currently have two retry policies configured for making some API calls, which are executed using a PolicyWrap:
- A WaitAndRetry policy for catching 429 rate limit errors and honoring the retry-after header
- A regular retry policy for handling timeouts/transient errors where waiting is not necessary and the call can be retried immediately
Is it possible to configure these to share the same retry counter? Say for example I wanted to configure 5 attempts to send the message regardless of which policy catches it.
Simplified example of my current configuration:
```csharp
int maxAttempts = 5;

AsyncRetryPolicy RetryAfter = Policy
    .Handle<HttpResponseException>(e => e.Response.StatusCode == HttpStatusCode.TooManyRequests)
    .WaitAndRetryAsync(retryCount: maxAttempts, i => TimeSpan.FromSeconds(1));

AsyncRetryPolicy RetryNow = Policy
    .Handle<HttpResponseException>(e => e.Response.StatusCode == HttpStatusCode.RequestTimeout)
    .RetryAsync(retryCount: maxAttempts);

AsyncPolicyWrap ApiPolicy = Policy.WrapAsync(RetryNow, RetryAfter);
```
I'm using onRetryAsync to log each retry attempt. When it executes, I get output along the lines of the following, where the RetryAfter policy resets its retry counter whenever the RetryNow policy triggers:
```
Received error code ServerTimeout with internal status 408. Retry attempt #1
Received error code ServerError with internal status 429. Retry attempt #1 after 00:00:00.6010000
Received error code ServerTimeout with internal status 408. Retry attempt #2
Received error code ServerError with internal status 429. Retry attempt #1 after 00:00:00.5000000
Received error code ServerError with internal status 429. Retry attempt #2 after 00:00:00.3880000
Received error code ServerTimeout with internal status 408. Retry attempt #3
Received error code ServerError with internal status 429. Retry attempt #1 after 00:00:00.5230000
Received error code ServerTimeout with internal status 408. Retry attempt #4
Received error code ServerError with internal status 429. Retry attempt #1 after 00:00:00.5000000
Received error code ServerError with internal status 429. Retry attempt #2 after 00:00:00.1740000
Received error code ServerTimeout with internal status 408. Retry attempt #5
```
This means it's theoretically possible for the total execution count to be far greater than the sum of the retryCount values configured on the policies. Ideally I would like this to execute a maximum of 5 retries total, but at this point I'd be happy to just have both policies honor their configured retryCount, for a maximum of 10 retries. As it stands, the theoretical maximum with this configuration seems to be 36 total executions: each of the 6 RetryNow attempts (1 initial + 5 retries) can itself incur up to 6 RetryAfter executions.
Am I doing something wrong here? Is there a recommended way to handle this kind of situation? I'm still trying to get my head around policy configuration, so I assume I'm missing some recommended/best practice. Otherwise this seems like a major oversight, and surely I wouldn't be the only person to run into it, but I couldn't find anything about it in the documentation.
Solution 1: Combine the two retry policies into one
You can handle both exceptions in one policy. Use the .Or<>() syntax to specify the second exception, and the WaitAndRetryAsync overload whose sleepDurationProvider chooses the duration of wait-before-retry based on the exception (and other factors). This single retry policy is limited to 1 + maxAttempts tries overall, and can reflect the retry delays in your original configuration.

Solution 2: Govern the overall execution time to curb 'excessive' retries
Alternatively, if you find it clearer still to express the two retries as separate policies (i.e. as the original RetryAfter and RetryNow), then another option is to wrap a TimeoutPolicy outside them. That TimeoutPolicy will limit the overall duration of the combined execution - including all tries and waits between retries - providing a time-based way to prevent the combined retries multiplying unwantedly. If you want a timeout-per-try, you can also introduce one at the inner end of the PolicyWrap.
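Solution 1 might be sketched as follows (assuming Polly v7 syntax; the fixed one-second delay stands in for honoring the Retry-After header, matching the simplified example in the question):

```csharp
using System;
using System.Net;
using System.Threading.Tasks;
using Polly;
using Polly.Retry;

int maxAttempts = 5;

AsyncRetryPolicy ApiPolicy = Policy
    .Handle<HttpResponseException>(e => e.Response.StatusCode == HttpStatusCode.TooManyRequests)
    .Or<HttpResponseException>(e => e.Response.StatusCode == HttpStatusCode.RequestTimeout)
    .WaitAndRetryAsync(
        retryCount: maxAttempts,
        // One shared attempt counter; only the wait depends on which error occurred.
        sleepDurationProvider: (retryAttempt, exception, context) =>
            exception is HttpResponseException ex
            && ex.Response.StatusCode == HttpStatusCode.TooManyRequests
                ? TimeSpan.FromSeconds(1) // stand-in for reading the Retry-After header
                : TimeSpan.Zero,          // 408/transient: retry immediately
        onRetryAsync: (exception, sleep, retryAttempt, context) =>
        {
            Console.WriteLine($"Retry attempt #{retryAttempt} after {sleep}");
            return Task.CompletedTask;
        });
```

Because there is only one retryAttempt counter, a 408 retry can no longer reset the 429 count, and the whole sequence is capped at 1 + maxAttempts executions.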
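Solution 2 might look like this sketch, reusing the RetryAfter and RetryNow policies from the question (the 30-second overall and 2-second per-try budgets are illustrative values, not recommendations):

```csharp
using System;
using Polly;
using Polly.Timeout;
using Polly.Wrap;

// Outer cap on the whole operation: all tries plus all waits between retries.
AsyncTimeoutPolicy overallTimeout = Policy.TimeoutAsync(TimeSpan.FromSeconds(30)); // assumed budget

// Outermost policy first: the timeout governs everything wrapped inside it.
AsyncPolicyWrap ApiPolicy = Policy.WrapAsync(overallTimeout, RetryNow, RetryAfter);

// Optionally, a timeout-per-try sits at the inner end of the wrap, closest to the call:
AsyncTimeoutPolicy perTryTimeout = Policy.TimeoutAsync(TimeSpan.FromSeconds(2));   // assumed per-try limit
AsyncPolicyWrap ApiPolicyWithPerTry =
    Policy.WrapAsync(overallTimeout, RetryNow, RetryAfter, perTryTimeout);
```

Note that Polly's default optimistic timeout strategy relies on the executed delegate honoring the CancellationToken it is passed; otherwise consider TimeoutStrategy.Pessimistic.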