React Flux dispatcher vs Node.js EventEmitter - scalable?

When you use Node's EventEmitter, you subscribe to a single event. Your callback is only executed when that specific event is fired:

var EventEmitter = require('events').EventEmitter;
var eventBus = new EventEmitter();

eventBus.on('some-event', function(data){
   // data is specific to 'some-event'
});

In Flux, you register your store with the dispatcher, and then your store gets called every time any event is dispatched. It is the job of the store to filter through every event it gets and determine whether that event is important to the store:

var Dispatcher = require('flux').Dispatcher;
var eventBus = new Dispatcher();

eventBus.register(function(data){
   switch(data.type){
      case 'some-event':
         // now data is specific to 'some-event'
         break;
   }
});

In this video, the presenter says:

"Stores subscribe to actions. Actually, all stores receive all actions, and that's what keeps it scalable."

Question

Why and how is sending every action to every store [presumably] more scalable than only sending actions to specific stores?

There are 2 answers

Michelle Tilley (Best Answer)

The scalability referred to here is more about scaling the codebase than scaling in terms of how fast the software is. Data in Flux systems is easy to trace because every store is registered to every action, and the actions define every app-wide event that can happen in the system. Each store can determine how it needs to update itself in response to each action, without the programmer needing to decide which stores to wire up to which actions, and in most cases, you can change or read the code for a store without needing to worry about how it affects any other store.

At some point the programmer will need to register the store. The store is very specific to the data it'll receive from the event. How exactly is looking up the data inside the store better than registering for a specific event, and having the store always expect the data it needs/cares about?

The actions in the system represent the things that can happen in a system, along with the relevant data for that event. For example:

  • A user logged in; comes with user profile
  • A user added a comment; comes with comment data, item ID it was added to
  • A user updated a post; comes with the post data
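
Concretely, each of those actions might just be a plain object carrying a type and its payload. The type names and fields below are hypothetical examples, not part of any standard:

// Hypothetical action payloads: the type names and fields are examples only
var userLoggedIn = {
  type: 'USER_LOGGED_IN',
  profile: { id: 7, name: 'Ada' }
};

var commentAdded = {
  type: 'COMMENT_ADDED',
  itemId: 42,
  comment: { id: 314, text: 'Nice post!' }
};

var postUpdated = {
  type: 'POST_UPDATED',
  post: { id: 42, title: 'Updated title' }
};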

So, you can think about the actions as the database of things the stores can know about. Any time an action is dispatched, it's sent to each store. So, at any given time, you only need to think about your data mutations one store and one action at a time.

For instance, when a post is updated, you might have a PostStore that watches for the POST_UPDATED action, and when it sees it, it will update its internal state to store off the new post. This is completely separate from any other store which may also care about the POST_UPDATED event—any other programmer from any other team working on the app can make that decision separately, with the knowledge that they are able to hook into any action in the database of actions that may take place.
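
A minimal sketch of such a store, assuming the flux npm package's Dispatcher and the hypothetical POST_UPDATED action shape from above:

var Dispatcher = require('flux').Dispatcher;
var dispatcher = new Dispatcher();

// A PostStore that keeps posts keyed by id and updates itself on POST_UPDATED.
var PostStore = {
  posts: {},

  dispatchToken: dispatcher.register(function(action) {
    switch (action.type) {
      case 'POST_UPDATED':
        // Store off the new post; any other store can react to the same action.
        PostStore.posts[action.post.id] = action.post;
        break;
    }
  })
};

// Hypothetical action, shaped like the examples above.
dispatcher.dispatch({ type: 'POST_UPDATED', post: { id: 42, title: 'Hello' } });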

Another reason this is useful and scalable in terms of the codebase is inversion of control; each store decides what actions it cares about and how to respond to each action; all the data logic is centralized in that store. This is in contrast to a pattern like MVC, where a controller is explicitly set up to call mutation methods on models, and one or more other controllers may also be calling mutation methods on the same models at the same time (or different times); the data update logic is spread through the system, and understanding the data flow requires understanding each place the model might update.

Finally, another thing to keep in mind is that registering vs. not registering is sort of a matter of semantics; it's trivial to abstract away the fact that the store receives all actions. For example, in Fluxxor, the stores have a method called bindActions that binds specific actions to specific callbacks:

this.bindActions(
  "FIRST_ACTION_TYPE", this.handleFirstActionType,
  "OTHER_ACTION_TYPE", this.handleOtherActionType
);

Even though the store receives all actions, under the hood it looks up the action type in an internal map and calls the appropriate callback on the store.
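
A rough sketch of that kind of abstraction (this is not Fluxxor's actual source, just an illustration of binding specific action types to callbacks on top of a dispatcher that still hands the store every action):

var Dispatcher = require('flux').Dispatcher;

// Illustration only: map action types to handlers, but register one callback.
function createStore(dispatcher, handlers) {
  var store = {
    dispatchToken: dispatcher.register(function(action) {
      // The store is handed every action; the internal map decides
      // whether one of its callbacks should run.
      var handler = handlers[action.type];
      if (handler) {
        handler.call(store, action);
      }
    })
  };
  return store;
}

var dispatcher = new Dispatcher();
var store = createStore(dispatcher, {
  FIRST_ACTION_TYPE: function(action) { /* handle it */ },
  OTHER_ACTION_TYPE: function(action) { /* handle it */ }
});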

tonypee

I've been asking myself the same question, and can't see technically how registering adds much, beyond simplification. I will pose my understanding of the system so that, hopefully, if I am wrong, I can be corrected.

TL;DR: EventEmitter and Dispatcher serve similar purposes (pub/sub) but focus their efforts on different features. Specifically, the 'waitFor' functionality (which allows one event handler to ensure that a different one has already been called) is not available with EventEmitter; it is the feature the Dispatcher has focused its efforts on.


The final result of the system is to communicate to the stores that an action has happened. Whether the store 'subscribes to all events, then filters' or 'subscribes to a specific event' (filtering at the dispatcher) should not affect the final result: data is transferred through your application either way. (A handler always switches on the event type and only processes the events it cares about; it doesn't want to operate on ALL events.)

As you said, "At some point the programmer will need to register the store." It is just a question of the fidelity of subscription. I don't think that a change in fidelity has any effect on 'inversion of control', for instance.

The added (killer) feature in Facebook's Dispatcher is its ability to 'waitFor' a different store to handle the event first. The question is: does this feature require that each store has only one event handler?

Let's look at the process. When you dispatch an action on the Dispatcher, it (omitting some details):

  • iterates all registered subscribers (to the dispatcher)
  • calls the registered callback (one per store)
  • the callback can call 'waitFor()' and pass a 'dispatchToken'. This internally references the callback registered by a different store. That callback is executed synchronously, causing the other store to receive the action and be updated first. This requires that 'waitFor()' is called before the code which handles the action.
  • the callback invoked via 'waitFor' switches on the action type to execute the correct code.
  • the original callback can now run its code, knowing that its dependencies (other stores) have already been updated.
  • the original callback switches on the action 'type' to execute the correct code.

This seems a very simple way to allow event dependencies.
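
As a sketch of that flow, assuming the flux npm package's Dispatcher and two hypothetical stores where CountStore depends on ItemStore:

var Dispatcher = require('flux').Dispatcher;
var dispatcher = new Dispatcher();

// CountStore registers first, but still waits for ItemStore on each action.
var CountStore = {
  count: 0,
  dispatchToken: dispatcher.register(function(action) {
    switch (action.type) {
      case 'ADD_ITEM':
        // Make sure ItemStore has handled this action before reading from it
        dispatcher.waitFor([ItemStore.dispatchToken]);
        CountStore.count = ItemStore.items.length;
        break;
    }
  })
};

var ItemStore = {
  items: [],
  dispatchToken: dispatcher.register(function(action) {
    switch (action.type) {
      case 'ADD_ITEM':
        ItemStore.items.push(action.item);
        break;
    }
  })
};

dispatcher.dispatch({ type: 'ADD_ITEM', item: { id: 1 } });
// CountStore.count is now 1, because waitFor ran ItemStore's callback first.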

Basically, all callbacks are eventually called, but in a specific order, and each then switches to execute only its specific code. So, it is as if we only triggered a handler for the 'add-item' event on each store, in the correct order.

If subscriptions were at a callback level (not 'store' level), would this still be possible? It would mean:

  • Each store would register multiple callbacks to specific events, keeping references to their 'dispatchTokens' (same as currently)
  • Each callback would have its own 'dispatchToken'
  • The user would still 'waitFor' a specific callback, but it would be a specific handler for a specific store
  • The dispatcher would then only need to dispatch to the callbacks registered for a specific action, in the same order (see the sketch after this list)
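
A purely hypothetical sketch of what that callback-level registration could look like; none of these names exist in Facebook's Dispatcher, it only illustrates the idea:

// Hypothetical dispatcher with per-callback registration; not a real API.
function HypotheticalDispatcher() {
  var callbacks = {};        // token -> callback
  var tokensByType = {};     // action type -> tokens, in registration order
  var lastId = 0;

  this.register = function(actionType, callback) {
    var token = 'CB_' + (++lastId);
    callbacks[token] = callback;
    (tokensByType[actionType] = tokensByType[actionType] || []).push(token);
    return token;
  };

  this.dispatch = function(action) {
    // Only callbacks registered for this action type run, still in
    // registration order, so waitFor-style ordering could be layered
    // on top of these per-callback tokens.
    (tokensByType[action.type] || []).forEach(function(token) {
      callbacks[token](action);
    });
  };
}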

Possibly, the smart people at Facebook have figured out that adding the complexity of individual callbacks would actually be less performant, or possibly it is simply not a priority.