I have recently been banging my head against this problem.
Single-server scenario
You have a client that is occasionally connected to a single server.
I recently watched Greg Young's video on occasionally connected systems on skillsmatter.com again.
There he states that the best way to deal with it is:
- You have a client using an event stream.
- The client persists all commands it has executed in a queue.
- When the server is available again, the client pushes all commands and downloads the resulting events (locally, the client deletes the events it emitted itself, since the server is the single source of truth). A rough sketch of this is shown below.
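To make that flow concrete, here is a minimal TypeScript sketch of such a client-side queue. All names here (`Command`, `DomainEvent`, the `ServerGateway` shape) are my own assumptions for illustration, not an existing API:

```typescript
// Minimal sketch of the client-side command queue described above.
// All types and the server gateway shape are assumptions for illustration.

interface Command { type: string; aggregateId: string; payload: unknown; }
interface DomainEvent { type: string; aggregateId: string; version: number; data: unknown; }

interface ServerGateway {
  submit(command: Command): Promise<void>;
  loadEvents(aggregateId: string): Promise<DomainEvent[]>;
}

class OccasionallyConnectedClient {
  private pendingCommands: Command[] = [];
  private localEvents: DomainEvent[] = [];

  // While offline, executed commands are queued and their events kept only provisionally.
  execute(command: Command, locallyEmittedEvents: DomainEvent[]): void {
    this.pendingCommands.push(command);
    this.localEvents.push(...locallyEmittedEvents);
  }

  // When the server is reachable again: push all queued commands, then replace
  // the provisional local events with the events the server actually emitted.
  async synchronize(server: ServerGateway, aggregateId: string): Promise<void> {
    for (const command of this.pendingCommands) {
      await server.submit(command);
    }
    this.pendingCommands = [];
    this.localEvents = await server.loadEvents(aggregateId); // server is the source of truth
  }
}
```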
Now this is all fine, but I have a somewhat more complicated scenario - one with two servers: Test and Production
Multi-server scenario
- So the client is used to create some sort of template
- it then connects to the Test server and pushes its commands => we've got a new template version on the Test server
- From time to time, the client reconnects to the test server and changes the template
- At one point, however, that template is to be published to the Production server.
You can also think of this like a git merge scenario, where you'd like to push all of your changes from your forked remote repository A to the original remote repository B.
The problem is:
- Once the client pushes its changes to the Test environment, the commands are gone - so how can it push those changes to the Production environment?
- Also: a different user may be the one who actually publishes the new template to Production (e.g. because publishing is done by an IT guy and not the developer).
So I guess there are basically two solutions to this problem:
a) Persist the command stream and download it to the client as well - but this conflicts with the fact that in Event Sourcing systems, only events are stored.
b) Have a mechanism to re-create commands from already committed events. That way, the client could look at the template's version on Production to see which events have not been emitted there yet, then create the commands accordingly and execute them (see the sketch below).
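To illustrate option b, here is a hedged sketch of what "re-creating commands from committed events" could look like. The event and command type names (`TemplateFieldAdded`, `AddTemplateField`, ...) are hypothetical; every event type would need its own translation back into a command:

```typescript
// Hypothetical sketch of option b: find the events Production has not seen yet
// and translate each one back into a command. Type names are made up for illustration.

interface Command { type: string; aggregateId: string; payload: unknown; }
interface DomainEvent { type: string; aggregateId: string; version: number; data: unknown; }

// Events that exist in the Test stream but not yet in the Production stream.
function missingEvents(testStream: DomainEvent[], productionStream: DomainEvent[]): DomainEvent[] {
  const seen = new Set(productionStream.map(e => `${e.aggregateId}:${e.version}`));
  return testStream.filter(e => !seen.has(`${e.aggregateId}:${e.version}`));
}

// Map each event type back to the command that would produce it.
function recreateCommand(event: DomainEvent): Command {
  switch (event.type) {
    case "TemplateFieldAdded":
      return { type: "AddTemplateField", aggregateId: event.aggregateId, payload: event.data };
    case "TemplateRenamed":
      return { type: "RenameTemplate", aggregateId: event.aggregateId, payload: event.data };
    default:
      throw new Error(`No command mapping for event type ${event.type}`);
  }
}

// The client would then execute the recreated commands against Production:
// missingEvents(testStream, productionStream).map(recreateCommand)
```

Note that matching events by aggregate id and version assumes both environments number events the same way; in practice you would probably need a stable event id instead.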
My questions:
- I guess only b is a viable option here?
- Does option b violate any rule/principle of event sourcing and/or cqrs?
- Is there any other and even better way to do this?
Thank you guys for your thoughts!
UPDATE
Thank you for sharing your thoughts. But it seems I have to clarify things a bit - the setup with Test and Production.
Let's assume that we are building some application framework like Salesforce: by using point and click, you can define your application with its entities, workflows and so on. Of course you would do this in a separate sandbox environment. When you are finished with your first application version, you will want to move it to a production server, so your users can actually use the application you "built".
Now consider the following: In production, you realize, that there is a tiny mistake and you fix it right away. Then you'd like to transfer those changes back to the test environment.
So now the question is: what the heck is the source of truth?
I like to compare it to git, as most devs know it. Basically I can see that I have two options:
- git rebase
One of the environments is the book of record. In that case, I'd have to collect all commands and, once I am updating Production, push those commands. This is like a git rebase.
- git merge
Let's assume that both systems are the "book of record". In that case, I need to synchronize the events that happened in the Test environment with those that happened in the Production environment (so that I get the very same application definition in Test and Production). The events from Test are not just appended to Production's event stream, but sorted into Production's event stream in the order they actually happened. This is like a git merge (see the sketch below).
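Here is a rough sketch of that merge, assuming every event carries a timestamp for when it actually happened. Using wall-clock time across two environments is itself an assumption; in practice a causal or vector clock would be safer:

```typescript
// Rough sketch of the "git merge" idea: interleave the two streams by the time
// the events actually happened, rather than appending one stream after the other.

interface TimestampedEvent {
  type: string;
  aggregateId: string;
  occurredAt: number; // assumption: a comparable "when it happened" value exists
  data: unknown;
}

function mergeStreams(test: TimestampedEvent[], production: TimestampedEvent[]): TimestampedEvent[] {
  const merged: TimestampedEvent[] = [];
  let i = 0;
  let j = 0;
  while (i < test.length || j < production.length) {
    if (j >= production.length || (i < test.length && test[i].occurredAt <= production[j].occurredAt)) {
      merged.push(test[i++]);
    } else {
      merged.push(production[j++]);
    }
  }
  // Both environments would end up with the same merged, ordered stream.
  return merged;
}
```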
I personally prefer the git merge option as it results in the very same event order in both systems. And of course, this would allow the occasionally connected clients to use the very same approach for distributed collaboration: assume that the application we defined in Test and published to Production using this Event Sourcing approach is then used by multiple occasionally connected clients to actually collect data. Now multiple clients could work together on the very same aggregate root and sync with each other and the server (like in a peer-to-peer system) while still keeping the same event stream order.
The problem could be, however, that a client syncs an event e1 to the server, but the aggregate root that this event corresponds to has already handled an event e2 which happened later. So event handling would be out of order.
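One way to at least detect that case is the usual optimistic-concurrency check: each event carries the aggregate version it was based on, and the stream rejects it when the version has already moved on. A minimal sketch, with made-up type names:

```typescript
// Sketch of detecting the out-of-order case described above: when the client's
// event e1 arrives carrying a version the aggregate has already moved past
// (because e2 was handled first), treat it as a conflict instead of appending.

interface VersionedEvent {
  type: string;
  aggregateId: string;
  expectedVersion: number; // the stream version the client based this event on
  data: unknown;
}

class AggregateStream {
  private currentVersion = 0;
  private events: VersionedEvent[] = [];

  append(event: VersionedEvent): void {
    if (event.expectedVersion !== this.currentVersion) {
      // e1 was based on stale state; reject it (or trigger an explicit
      // re-order/merge step) rather than handling events out of order.
      throw new Error(
        `Concurrency conflict: event expected version ${event.expectedVersion}, ` +
        `but the stream is at version ${this.currentVersion}`
      );
    }
    this.events.push(event);
    this.currentVersion += 1;
  }
}
```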
So my question: Is there any significant downside in this "git merge"?
I think you are getting tangled up by your definition of the book of record / source of truth.
If production is the book of record, then that's where the commands should be sent. In this case, the test instance is similar in several respects to the client instance; it becomes a sandbox where you can try things out, but it may not actually represent "truth" when it comes to push things onto the next stage. So you keep the commands queued, and eventually deliver those commands to your production instance.
If test is the book of record for these changes, then you aren't sharing commands with production, but are sharing events (or possibly projections, depending on which model is a better fit for your actual use case). This is somewhat analogous to using microservices: test is the microservice that supports the commands sent to it, and production is a separate microservice that reacts to events from test -- in other words, production depends on the read model in test.
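A minimal sketch of that arrangement, assuming some kind of event feed from Test that Production can subscribe to (the `subscribe` API and type names here are assumptions, not a concrete event store client):

```typescript
// Sketch of the "test is the book of record" variant: Production does not accept
// template-editing commands at all; it just reacts to Test's events and maintains
// its own read model.

interface PublishedEvent { type: string; templateId: string; data: unknown; }

interface TestEventFeed {
  subscribe(handler: (event: PublishedEvent) => void): void;
}

class ProductionTemplateProjection {
  private templates = new Map<string, unknown>();

  constructor(feed: TestEventFeed) {
    feed.subscribe(event => this.apply(event));
  }

  private apply(event: PublishedEvent): void {
    // Production only builds a view of the data; it never changes it.
    if (event.type === "TemplatePublished") {
      this.templates.set(event.templateId, event.data);
    }
  }
}
```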
A review of what Udi Dahan has to say about services may help clarify things.
My guess, based on your use of "test" and "production", is that the production system doesn't have the authority to change the data that is being pushed to test; it just consumes a view of that data. Which puts you squarely back into the single server (really: single book of record) use case.