Is there a modeling tool where I can evaluate different configurations of AWS Lambda functions?
I have a workload that needs to be processed. I could build one large function, or several small functions chained together with queues. I want to determine which combination will be quickest. I can provide the processing rate for each function.
Some examples of the combinations I am considering are:
- one large function (P)
Workload in -> P -> Result out
- several small functions (A, B, C) but chained by queues (Q)
Workload in -> A -> Q -> B -> Q -> C -> Result out
- several small functions chained this way
                    -> Q -> B --
                  /             \
Workload in -> A ----> Q -> C ----> X -> Result out
                  \             /
                    -> Q -> D --
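Ignoring the queues for a moment, I can at least compose the raw per-function processing times for each shape. A rough sketch of that comparison in Python, where every function name and timing is a made-up placeholder:

```python
# Rough comparison of the three layouts using only per-function processing
# times (no queueing, cold starts, or invocation overhead). All numbers are
# made-up placeholders in seconds per item.
t = {"P": 0.9, "A": 0.2, "B": 0.4, "C": 0.3, "D": 0.35, "X": 0.1}

# Layout 1: one large function.
latency_single = t["P"]

# Layout 2: straight chain A -> B -> C; stages in series simply add up.
latency_chain = t["A"] + t["B"] + t["C"]

# Layout 3: A fans out to B, C, D in parallel and the results merge at X;
# the slowest parallel branch dominates.
latency_fanout = t["A"] + max(t["B"], t["C"], t["D"]) + t["X"]

print(f"single: {latency_single:.2f} s, chain: {latency_chain:.2f} s, "
      f"fan-out: {latency_fanout:.2f} s")
```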
Based on the research I have done so far, I could use Little's law to determine the number of items in the system: number of items in the system = arrival rate × average time an item spends in the system (waiting plus processing), i.e. L = λW.
However, it seems to apply to only one process. If I have a chain of processes, how do I apply Little’s law (or is there a different law/formula to apply)?
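To make the question concrete, here is the kind of per-stage calculation I have in mind for the straight chain. I am treating each stage as an M/M/1 queue purely as a simplifying assumption, and the arrival and processing rates are made up:

```python
# Per-stage view of the chain A -> Q -> B -> Q -> C in steady state.
# Assumption: each stage behaves like an M/M/1 queue, so the average time
# an item spends at a stage (waiting + processing) is W = 1 / (mu - lam).
# In steady state the departure rate of one stage equals the arrival rate
# of the next, so the same lam flows through every stage as long as
# lam < mu at every stage (the slowest stage caps overall throughput).
lam = 5.0                                # items per second entering the chain
mu = {"A": 20.0, "B": 8.0, "C": 12.0}    # processing rates, items per second

total_latency = 0.0
for name, rate in mu.items():
    assert lam < rate, f"stage {name} is unstable: its queue grows without bound"
    wait = 1.0 / (rate - lam)            # average time in stage (queue + service)
    in_system = lam * wait               # Little's law: L = lambda * W
    total_latency += wait
    print(f"{name}: W = {wait:.3f} s, items in stage (L) = {in_system:.2f}")

print(f"end-to-end latency for the chain: {total_latency:.3f} s")
```

What I am unsure about is whether the end-to-end behaviour of the chain really is just the sum of these per-stage numbers.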
I would appreciate any insights into how I can model this, and any tool or formula that would help me choose an optimal configuration.
You can use AWS Step Functions for this. You would define a state machine that executes the Lambda functions in the sequence you want.
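A minimal sketch of such a state machine, with the definition written in the Amazon States Language and created via boto3; every name, ARN, and role below is a placeholder:

```python
import json

import boto3

# Minimal Amazon States Language definition chaining two Lambda functions.
# The function names, ARNs, and IAM role are placeholders for illustration.
definition = {
    "StartAt": "FunctionA",
    "States": {
        "FunctionA": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:FunctionA",
            "Next": "FunctionB",
        },
        "FunctionB": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:FunctionB",
            "End": True,
        },
    },
}

sfn = boto3.client("stepfunctions")
response = sfn.create_state_machine(
    name="workload-pipeline",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",
)
print(response["stateMachineArn"])
```

For the fan-out layout, a Parallel state can run several branches of the state machine at the same time.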