We have an application which runs every 5-10 minutes, and its job is to use the latest data to update the in-memory state of objects of class Foo.
The class state can be represented as:

    class Foo {
        int foo1;
        double foo2;
        bar foo3;
        double foo4;
        ...
        int fooN;
    };
So for the n-th run of this application, which gets the latest data bazObj(n) of class Baz, Foo(n) = f(Foo(n-1), Baz(n)).
The tricky thing here is that there are a lot of interdependencies in the computation. For example, the computation of foo3 depends on foo1 already being updated, the computation of foo4 depends on foo2 and foo3 already being updated, and so on.
Our current design is a simple sequential flow; the dependencies are implicit in the statement order and aren't captured or enforced anywhere, which makes the code hard to maintain.
What would be good ways to structure the computation of Foo(n) to make the code flow intuitive and easy to understand and maintain (in C++)? Any pointers to relevant design patterns would also be helpful.
Your goal seems to be to compute each object after the objects it depends on have been computed. It sounds like you have been doing this by hand in your code and finding it to be bug-prone.
Here is a simple approach. It can be greatly improved by sorting based on dependencies, but it will get the job done.
Doing this will make your code simpler because you can simply add or modify your objects and their immediate dependencies without worrying about where to add them in the list. The above algorithm will make sure they are computed in the proper order.
This is not a particularly fast method; it's brute force, but easy. It can be significantly improved.
You can record the order in which you computed the objects and reuse it in a subsequent pass (assuming nothing has changed).