What is the use case for EPOLLET?


epoll in edge-triggered mode is a strange beast. It requires the process to keep track of the last response for each monitored FD, and it mandates that the process handle, without fail, each and every event reported (or else we might think an FD is reporting nothing while it is, in fact, muted by the edge-trigger behavior).

What are the use cases where edge trigger epoll makes sense?

1 Answer

Shachar Shemesh (BEST ANSWER)

The main use case for EPOLLET that I'm aware of is with micro-threads.

To recap: user space does context switches between micro-threads (which I'm going to call "fibers" because it's shorter) based on the availability of something to work on. This is also called "cooperative multitasking".

The basic handling of file descriptors is by wrapping the relevant IO functions like so:

#include <unistd.h>  // the real ::read
#include <cerrno>    // errno, EAGAIN, EWOULDBLOCK

namespace fiber { // keeps the wrapper from clashing with the libc read
enum Direction { READ, WRITE };
void start_monitoring(int fd, Direction d); // fiber-library hook, elsewhere
void wait_event();                          // yields to the fiber scheduler

ssize_t read(int fd, void *buffer, size_t length) {
  // fd should already be in O_NONBLOCK mode
  while (true) {
    ssize_t result = ::read(fd, buffer, length); // the real read
    // Done on success, EOF, or any error other than "would block"
    if (result != -1 || (errno != EAGAIN && errno != EWOULDBLOCK))
      return result;

    start_monitoring(fd, READ); // ensure fd is monitored for readability
    wait_event();               // switch out until data is ready
  }
}
} // namespace fiber

start_monitoring is a function that makes sure that fd is monitored for read availability. wait_event performs a context switch out until the scheduler re-awakens this fiber because fd now has data ready for reading.

The usual way to implement this with epoll is to call EPOLL_CTL_MOD on fd within start_monitoring to start listening for EPOLLIN, and to call it again, after epoll has reported the event, to stop listening for EPOLLIN.
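Here is a minimal sketch of that level-triggered approach. It assumes a shared epoll instance epfd, assumes fd was previously registered with EPOLL_CTL_ADD, and drops the direction argument for brevity; none of these names are from the original, they are illustrative only:

#include <sys/epoll.h>

extern int epfd; // the scheduler's shared epoll instance (assumed)

void start_monitoring(int fd, int /*direction; only READ shown here*/) {
  epoll_event ev = {};
  ev.events = EPOLLIN;  // begin listening for readability
  ev.data.fd = fd;
  epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev); // system call #1
}

void stop_monitoring(int fd) {
  epoll_event ev = {};
  ev.events = 0;        // stop listening; fd stays registered with epoll
  ev.data.fd = fd;
  epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev); // system call #2
}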

This means that a read that has data available will finish within 1 system call, but a read that returns EAGAIN will take at least 4 system calls (original read, two EPOLL_CTL_MOD, and the final read that succeeds).

Notice that the above does not count the epoll_wait that also has to take place. I do not count it because I'm making the generous assumption that other fibers are also about to be woken by that same system call, so it is unfair to attribute its cost entirely to our fiber. All in all, this mechanism needs 4+x system calls, where x is between 0 and 1.

One way to reduce the cost is to use EPOLLONESHOT. Doing so removes fd from monitoring automatically once an event is reported, eliminating the second EPOLL_CTL_MOD and reducing our cost to 3+x. Better, but we can do better still.
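Under the same assumptions as the sketch above, the arming side then becomes the only epoll_ctl call on the hot path; the kernel disarms the fd by itself after reporting one event:

#include <sys/epoll.h>

extern int epfd; // same assumed shared epoll instance as above

void start_monitoring_oneshot(int fd) { // illustrative name
  epoll_event ev = {};
  ev.events = EPOLLIN | EPOLLONESHOT; // kernel auto-disarms after one report
  ev.data.fd = fd;
  epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev); // the single system call that remains
}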

Enter EPOLLET. The fd's previous state can be either armed or unarmed (i.e., whether the next event will trigger the epoll). Also, the fd may or may not currently (at the point of entry to read) have data ready. Four states in total. Let's spread them out.

Ready (whether armed or not): The first call to read returns the data. 1 system call. This path does not change the armed state, and the ready state depends on whether we read everything.

Not ready (whether armed or not): The first call to read returns EAGAIN, thus arming the fd. We go to sleep in wait_event without having to execute another system call. Once we wake up, we are unarmed (the event was just reported, so another will not fire until a new edge occurs). We thus do not need to call epoll_ctl to stop listening on the fd. We call read again, and it returns the data. We leave the function either ready or not, but unarmed.

Total cost: 2+x.
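The epoll side of this scheme is a single, one-time registration per fd; nothing epoll-related runs on the hot path afterwards. A sketch, again with the assumed shared epfd:

#include <sys/epoll.h>

extern int epfd; // same assumed shared epoll instance as above

void register_fd(int fd) { // illustrative name
  epoll_event ev = {};
  ev.events = EPOLLIN | EPOLLET; // report only not-ready -> ready transitions
  ev.data.fd = fd;
  epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev); // done once, e.g. right after accept
}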

We will have to face one spurious wakeup per fd, since each fd starts out armed: our code must handle the case where epoll reports an fd on which no fiber is listening. Handling, in this case, just means ignoring the event and moving on. The fd will not be spuriously reported again.
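For illustration, here is a minimal sketch of the scheduler's wait loop under this scheme, where fiber_waiting_on and wake_fiber are hypothetical fiber-library helpers (not from the original):

#include <sys/epoll.h>

extern int epfd;                  // same assumed shared epoll instance as above
struct Fiber;                     // opaque fiber handle
Fiber *fiber_waiting_on(int fd);  // hypothetical: fiber blocked on fd, or nullptr
void wake_fiber(Fiber *f);        // hypothetical: make that fiber runnable

void scheduler_poll() {
  epoll_event events[64];
  int n = epoll_wait(epfd, events, 64, -1); // the shared, x-cost system call
  for (int i = 0; i < n; ++i) {
    if (Fiber *f = fiber_waiting_on(events[i].data.fd))
      wake_fiber(f);
    // else: spurious wakeup (fd armed, but no fiber waiting) -- ignore it
  }
}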