Consider the following code sample:
import R from 'ramda';
import {Observable} from 'rx';

var allClicks_ = Observable.fromEvent(window, 'click').share();

var getClicks = function (klass) {
  return allClicks_.filter(e => R.contains(klass, e.target.classList));
};

getClicks('red').subscribe(x => {
  render('RED: ' + x.target.className);
});

getClicks('blue').subscribe(x => {
  render('BLUE: ' + x.target.className);
});
Instead of adding click event listeners to ".red" and ".blue" individually, I added a single event listener to window and filtered for events whose targets carry the "red" or "blue" class.

What can go wrong with code like this? Is it more (or less) efficient than adding event listeners to individual DOM nodes, or does it have no performance benefit at all?
Edit: Share the hot Observable so only one event handler is attached.
This is an example of a delegated event handler. The pattern is so useful that libraries like jQuery and Dojo have built-in support for it (see the selector argument of jQuery's .on() and dojo/on).

Adding event handlers to each DOM node is effectively an O(n) operation, whereas the delegated handler is O(1): one listener serves any number of matching nodes. As the number of matching DOM nodes grows, the delegated event handler pattern realizes a bigger benefit.
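As a rough, framework-free sketch of the mechanism (the onClass/dispatch helpers and the plain-object events below are invented for illustration; real code would receive DOM events via window.addEventListener('click', dispatch)):

```javascript
// Minimal sketch of class-based event delegation, independent of any
// framework. Events are modeled as plain objects with a classList array
// so the logic is easy to follow outside a browser.

// Registry of (className -> handler) pairs. One dispatch function
// serves them all, which is what makes delegation O(1) in listeners.
const handlers = [];

function onClass(className, handler) {
  handlers.push({className, handler});
}

// The single delegated dispatcher: inspect the event's target and
// forward the event to every handler whose class matches.
function dispatch(event) {
  for (const {className, handler} of handlers) {
    if (event.target.classList.includes(className)) {
      handler(event);
    }
  }
}

// Usage, with synthetic events standing in for real clicks:
const seen = [];
onClass('red', e => seen.push('RED: ' + e.target.classList.join(' ')));
onClass('blue', e => seen.push('BLUE: ' + e.target.classList.join(' ')));

dispatch({target: {classList: ['red', 'btn']}});
dispatch({target: {classList: ['blue']}});
dispatch({target: {classList: ['green']}}); // matches nothing

console.log(seen); // ['RED: red btn', 'BLUE: blue']
```

No matter how many "red" or "blue" elements exist, only the registry grows; the number of attached listeners stays at one.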
What can go wrong?
1. If there is an event handler attached between your top-level element (window, in this case) and your target elements, and that handler calls ev.stopPropagation(), then your delegated handler will never see the event.

2. If your filter function is overly complex and slow, the browser will have to spend more time than usual running it for every event that bubbles up.

3. You'll get events for DOM nodes added after you attach your event handler. This is normally seen as a good thing, but if for some reason you weren't expecting it, it might throw you.
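The first pitfall can be shown with a toy model of event bubbling (the bubble function and the node objects below are invented for illustration; they are not DOM APIs):

```javascript
// Toy model of event bubbling: an event visits each node on the path
// from target up to window; a handler that stops propagation prevents
// the remaining (outer) nodes from ever seeing the event.
function bubble(path, event) {
  for (const node of path) {
    for (const handler of node.handlers) {
      handler(event);
      if (event.stopped) return; // models ev.stopPropagation()
    }
  }
}

const log = [];
const target = {handlers: []};
// An intermediate element whose handler stops propagation:
const middle = {handlers: [e => { e.stopped = true; }]};
// The delegated handler sits at the top of the path (window):
const windowNode = {handlers: [e => log.push('delegated saw ' + e.type)]};

bubble([target, middle, windowNode], {type: 'click', stopped: false});
console.log(log); // []: the delegated handler never saw the click
```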
Note that in your particular example you are actually registering two click handlers (one per subscribe call). You could reduce that to a single underlying listener via share:
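For instance, here is a dependency-free model rather than the real Rx API (the coldClicks and share helpers below are hypothetical stand-ins for Observable.fromEvent and RxJS's share operator, which subscribes to its source once and multicasts to every downstream subscriber):

```javascript
// Model of what `share` buys you: the cold source counts how many
// times it is subscribed, and `share` multicasts a single upstream
// subscription to any number of downstream subscribers.

let listenersAttached = 0;

// A cold "fromEvent"-like source: each subscribe attaches a new listener.
function coldClicks() {
  return {
    subscribe(observer) {
      listenersAttached += 1; // models window.addEventListener
      // In a real source, click events would flow to `observer` here.
      return {unsubscribe() {}};
    },
  };
}

// share(): subscribe upstream once, fan events out to all observers.
function share(source) {
  const observers = [];
  let upstream = null;
  return {
    subscribe(observer) {
      observers.push(observer);
      if (!upstream) {
        upstream = source.subscribe({
          next: v => observers.forEach(o => o.next(v)),
        });
      }
      return {unsubscribe() {}};
    },
  };
}

// Without share: one listener per subscriber.
const cold = coldClicks();
cold.subscribe({next() {}});
cold.subscribe({next() {}});
console.log(listenersAttached); // 2

// With share: a single shared listener.
listenersAttached = 0;
const hot = share(coldClicks());
hot.subscribe({next() {}});
hot.subscribe({next() {}});
console.log(listenersAttached); // 1
```

This is exactly what the `.share()` call in the question's edited code accomplishes: both the 'red' and 'blue' subscriptions ride on one window click listener.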