I'm processing data via a number of channels where each channel feeds into the next (pipeline processing). I end up with a spawn at the top that looks like this:
let future = async move {
    while let Ok(msg) = r.recv().await {
        forwarder.receive(msg).await;
    }
};
executor_pool::ExecutorPool::spawn(future).detach();
The Forwarder looks like this:
impl Forwarder {
    pub fn validate_sequence(&mut self, msg: TestMessage) -> Result<TestMessage, TestMessage>;
    pub async fn handle_action(&mut self, cmd: TestMessage);
    pub async fn handle_notification(&mut self);

    pub async fn receive(&mut self, cmd: TestMessage) {
        match self.handle_config(cmd) {
            Ok(_) => (),
            Err(msg) => match self.validate_sequence(msg) {
                Ok(msg) => {
                    self.handle_action(msg).await;
                    self.handle_notification().await;
                },
                Err(msg) => panic!(
                    "{} sequence error: expecting {} not cmd {:#?}",
                    self.header(), self.next_seq, msg
                ),
            },
        }
    }
}
Both handle_action and handle_notification call into a sender, which is itself another async fn. My concern is twofold. First, the entire pathway to the send (or any other async fn) seems to require an async/await wrapper; in my case I'm 3 deep at the send, which seems a bit ugly, particularly if I have to do any refactoring. Second, is there a runtime cost for each level of async/await, or is the compiler smart enough to collapse these? If it helps to make this more concrete, think of it as audio processing where the first stage decodes, the next does leveling, the next does mixing, and the final stage encodes.
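To make that shape concrete, here is a minimal sketch of a single stage wired to its neighbors with channels. It assumes the async-channel crate (whose recv() returns a Result, matching the loop above); the stage name and payload types are purely illustrative.

```rust
use async_channel::{Receiver, Sender};

// Illustrative payload types only; the real pipeline has its own messages.
type EncodedFrame = Vec<u8>;
type Samples = Vec<f32>;

// One pipeline stage: receive from the previous stage, transform, forward.
async fn decode_stage(input: Receiver<EncodedFrame>, output: Sender<Samples>) {
    while let Ok(frame) = input.recv().await {
        // Stand-in for real decoding work.
        let samples: Samples = frame.into_iter().map(f32::from).collect();
        // Stop when the next stage has hung up.
        if output.send(samples).await.is_err() {
            break;
        }
    }
}
```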
To expand on the refactoring concern, let's look at refactoring a for loop.
pub async fn handle_action(&mut self, cmd: FwdMessage) {
    match cmd {
        FwdMessage::TestData(_) => {
            for sender in &self.senders {
                for _ in 0..self.forwarding_multiplier {
                    sender.send(FwdMessage::TestData(self.send_count)).await.ok();
                    self.send_count += 1;
                }
            }
        },
        _ => panic!("unhandled action"),
    }
}
Rather than a for loop, we'd like to use an iterator. However, that's going to require an async closure -- which is something I haven't figured out how to even express.
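For reference, the usual way to get the effect of an async closure on stable Rust is an ordinary closure that returns an async block, driven through the futures crate's stream combinators. Below is a sketch of the loop above in that style; it assumes senders holds async_channel-style senders and that send_count and forwarding_multiplier are u64, so adapt the types to the real ones.

```rust
use futures::stream::{self, StreamExt};

pub async fn handle_action(&mut self, cmd: FwdMessage) {
    match cmd {
        FwdMessage::TestData(_) => {
            let multiplier = self.forwarding_multiplier;
            for sender in &self.senders {
                // Reserve the sequence numbers up front: the future returned
                // by the closure below cannot mutably borrow self.send_count.
                let start = self.send_count;
                self.send_count += multiplier;
                // A plain closure returning an async block stands in for an
                // "async closure"; for_each awaits each returned future in turn.
                stream::iter(start..start + multiplier)
                    .for_each(|n| async move {
                        sender.send(FwdMessage::TestData(n)).await.ok();
                    })
                    .await;
            }
        }
        _ => panic!("unhandled action"),
    }
}
```

The one behavioral wrinkle is that send_count is advanced before the sends rather than after each one, because the future returned by the closure cannot hold a mutable borrow of self.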
Just the act of nesting `async` functions does not introduce any overhead. The `async`/`await` syntax compiles to a `Future` that is basically a state machine which can execute your code in a way that lets it be suspended when it needs to wait for something and resumed later. An `async` function and all (statically known) `await`'d futures are compiled together into the same `Future`; the function boundaries somewhat melt away. So yes, the compiler is "smart enough to collapse these" together.
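As a rough, self-contained illustration (a sketch, not the question's code): three nested `async fn`s produce one anonymous `Future` type, with the awaited inner futures stored inline in its state machine, and you can observe that the composed future is a single flat value:

```rust
async fn encode(x: u32) -> u32 { x + 1 }
async fn mix(x: u32) -> u32 { encode(x).await + 1 }
async fn level(x: u32) -> u32 { mix(x).await + 1 }

fn main() {
    // None of these futures is polled here; we only inspect their sizes.
    // Each level's state machine embeds the one below it inline, so the
    // nesting adds no allocation and no dynamic dispatch.
    println!("encode: {} bytes", std::mem::size_of_val(&encode(0)));
    println!("mix:    {} bytes", std::mem::size_of_val(&mix(0)));
    println!("level:  {} bytes", std::mem::size_of_val(&level(0)));
}
```

The collapsing only applies to statically known futures: the place where a real cost shows up is a `Box<dyn Future>` boundary (for example from a trait object or a recursive `async fn`), which introduces an allocation and dynamic dispatch.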