What's a good way to handle "catching up" on all the actions that should have happened while a game was closed?
Example: Tiny Towers. There are supplies, shops, customers: When a person closes the game and opens it the next day, the app has to figure out everything that happened.
We're trying to calculate this by simulating everything quickly, basically fast forwarding from when they left the game. Unfortunately this can take 20 seconds or longer.
Any suggestions for a better way to handle this?
This is a good but very open-ended question.
There are a variety of approaches, some probably better than others depending on your needs and on your ability and time to implement them. I'm going to attempt to answer it (edit: VERY) naively as a non-game-developer.
Your suggestion is tantamount to simulating the game without user interaction by speeding up the timers involved. To implement this, you would need to determine a fundamental counter of discrete time passage, like a tick. When the game was resumed, you would do:
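```python
# A sketch: `ticks_missed` is how many ticks elapsed while the game was
# closed, and `update` is a hypothetical method that advances the game
# state by exactly one tick.
def catch_up(game_state, ticks_missed):
    for i in range(ticks_missed):
        game_state.update(i)
```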
The counter `i` may not be necessary as a parameter - actually it shouldn't be, because it should probably be captured implicitly in the model of the game state - but the point is that you are literally catching up in this method. You might be able to hide some of the waiting by running this in the background as much as you can, but really, that's just spreading the cycles out rather than reducing them.
Another approach would be to simulate the game's execution indirectly through approximations. This means that instead of actually running through all the steps, you run through some divisor of that number and "guesstimate" what happens in between.
So for instance, maybe the game state is necessarily random - you have a random number in there somewhere, and you've missed 1000000 ticks. Well, instead of doing all 1000000, do 1000, and use statistics to model what happens in each "1000-tick". For instance, we know if you flip a coin once or twice, you might see all heads, or all tails - but if you flip a coin a large number of times, the frequency of each approaches 0.5.
So if your game required that coin flip on each tick, it would not necessarily be reasonable to do just one flip per "1000-tick"; rather, model the variance of 1000 coin flips and draw a number from a tighter range, like 0.48 to 0.52. That's not tight enough, by the way, but you get the idea.
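As a sketch of that idea (the function names are mine, not from any particular library): instead of performing 1000 individual flips, draw one number from the matching distribution.

```python
import random

def heads_in_n_flips_slow(n):
    # The literal simulation: n individual coin flips.
    return sum(random.random() < 0.5 for _ in range(n))

def heads_in_n_flips_fast(n, p=0.5):
    # One draw from the normal approximation of Binomial(n, p):
    # mean = n*p, standard deviation = sqrt(n*p*(1-p)).
    mean = n * p
    stddev = (n * p * (1 - p)) ** 0.5
    return max(0, min(n, round(random.gauss(mean, stddev))))
```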
If you don't have randomness involved, you can also use math to get an exact or nearly exact closed-form solution, provided that the system in question is not chaotic in nature. This would require a close study of the system.
In the case of Tiny Towers, my guess is that they optimize the game so that even while you are playing, the computer's rate of resource accumulation and other rates are not derived by simulation but driven by timers. So you have to build stuff to get resources according to the rules, but the computer simply gets X gold per minute, and so on. It may not look like that because of the animations involved, but that could be what is happening.
To sum up: first you need to understand the system that is being simulated.
If it is deterministic, then upon saving or exiting the game, the game should at least save all the relevant parameters (using something like a memento). Then either at that time or when you load, the game can use your analysis to do a forward-calculation or approximation. You will need to use an exact calculation if it is sensitive to initial conditions (chaotic).
If it is not deterministic, then upon saving/exiting you will also want to capture the variables involved, but your analysis and forward-calculation will need to use some statistical methods to reduce the order of magnitude of the number of ticks involved. Think of the coin flip analogy.
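A minimal sketch of that save/restore flow for the deterministic case (the field names here are hypothetical):

```python
import json
import time

def save_snapshot(game_state, path="save.json"):
    # Memento: capture only the parameters the forward-calculation needs.
    snapshot = {
        "saved_at": time.time(),
        "gold": game_state.gold,
        "gold_per_second": game_state.gold_per_second,
    }
    with open(path, "w") as f:
        json.dump(snapshot, f)

def load_and_catch_up(path="save.json"):
    with open(path) as f:
        snapshot = json.load(f)
    # Deterministic system: one calculation replaces tick-by-tick replay.
    elapsed = time.time() - snapshot["saved_at"]
    snapshot["gold"] += int(elapsed * snapshot["gold_per_second"])
    return snapshot
```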
Example: Gold accumulation
Let's say the only state to restore is how much gold the computer has when you resume.
If the computer always gets 10 gold per second, then we could iterate over all the missing ticks...
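```python
# A sketch: X is whatever the computer had when the game was closed,
# and each of the 1000 missed seconds is one tick.
gold = X
for second in range(1000):
    gold += 10
```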
And we would find, at the start of second 1001, that the computer has X + 10,000 gold.
Of course, this would be wasteful. Because multiplication is iterative addition, we could just do:
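```python
gold = X + 1000 * 10  # same result as the loop, in constant time
```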
Now, suppose the accumulation of gold is random. Maybe while you play there is some intrigue to that randomness: say the rule is that each second, the computer either gets 10 gold or gets 0 gold. This might make it more interesting for the player, as the computer can go on a "hot streak" and really challenge the player, or a cold streak, etc.
But there is no need for intrigue in a simulation, so that could be statistically modeled based on the expectation of gold after N seconds. Of course, we'd take the average, and if we wanted, we could introduce a small range of randomness that would be reasonably realistic.
For instance, let's say you said "let's model this based on what would happen in 99.7% of scenarios."
According to the trusty 68-95-99.7 rule, 3 standard deviations is the tightness required to model this.
So, we have:

sqrt( 1000 * 0.5 * (1 - 0.5) ) ≈ 16 (the standard deviation of 1000 coin flips)

X + (1000 * 0.5 * 10) + RANDOM(-1, 1) * 48 * 10

where 48 is three standard deviations (3 * 16) and RANDOM(-1, 1) is a uniform draw between -1 and 1.
In the worst case, the computer has X + 5480 gold.
Note that there is a huge discrepancy between the theoretical limits (0 and 10000) and the band of possibilities in 99.7% of scenarios (4520 and 5480). This is the power of statistics.
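Here is that calculation as a sketch (mirroring the RANDOM(-1, 1) band above rather than sampling a true binomial):

```python
import random

def fast_forward_gold(x, seconds=1000, p=0.5, gold_per_hit=10):
    mean = seconds * p * gold_per_hit
    # Standard deviation in gold: sqrt(n*p*(1-p)) heads, times 10 gold each.
    stddev = ((seconds * p * (1 - p)) ** 0.5) * gold_per_hit
    # Stay within three standard deviations, i.e. the band covering
    # roughly 99.7% of real outcomes.
    return x + round(mean + random.uniform(-1, 1) * 3 * stddev)
```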
This can be exploited to optimize another run loop that isn't so predictable. For instance, you may have to do an intricate simulation if the gold reaches 6000. By limiting the scenarios to the 99.7%, you eliminate the need to even think about it, because the max possible value is less than the threshold.
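For instance (the 6000 threshold and the helper name are just illustrative):

```python
def may_reach_threshold(x, threshold=6000):
    # X + 5480 is the upper edge of the 99.7% band computed above.
    return x + 5480 >= threshold

# If this returns False, the intricate "gold reaches 6000" simulation
# can be skipped outright.
```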
Now, note that complex systems don't always work like this. Say the rule is: the computer gets 1 gold per second, but the first time its total hits each prime number, the computer loses all of its gold.
Like: 1, 0, 1, 2, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 5, 6, 0, 1, ...
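A sketch that reproduces that sequence, reading the rule as "each prime triggers the reset only the first time it is reached":

```python
def gold_sequence(seconds):
    def is_prime(n):
        return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    gold, seen_primes, out = 0, set(), []
    for _ in range(seconds):
        gold += 1
        if is_prime(gold) and gold not in seen_primes:
            seen_primes.add(gold)
            gold = 0
        out.append(gold)
    return out

print(gold_sequence(18))  # [1, 0, 1, 2, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 5, 6, 0, 1]
```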
Now you have to pull out the prime number theorem, potentially factor numbers, and do all sorts of stuff. So really, after 1000 seconds, if you have, say, 10 gold, the computer has 100 gold, and it takes 200 gold to win, should we bother modeling?
There's of course an approach to that, but the point I am clumsily making is that the complexity of this relatively simple system makes a purely computational approach infeasible - and perhaps a statistical one as well.
Chunking
Something I have not mentioned is the idea of chunking the work into batches: do what is necessary to restore user interactivity first, and make everything hunky-dory later.
So, let's say we are making a game in which the computer's position is important. Then we might start by asking the question: is the computer going to be visible anytime soon?
If not, then we can go ahead and render the scene without worrying about the computer at all. Then once the scene is rendered, the focus of the work can shift to the question of where exactly the computer is.
Here is one naive thought-process the game may follow on resume:

1. Check whether the computer will be visible anytime soon.
2. If not, render the scene and hand control back to the player right away.
3. Once the player is interacting again, work out where exactly the computer is.
4. Then settle the rest of the world state in background batches, as sketched below.
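As a rough sketch of that flow (every function named here is a hypothetical placeholder, not a real API):

```python
import threading

def resume(game_state):
    # Chunk 1: the minimum work needed to hand control back to the player.
    if computer_visible_soon(game_state):       # hypothetical visibility check
        update_computer_position(game_state)    # must be settled before rendering
    render_scene(game_state)

    # Chunk 2: settle everything else in the background, one batch at a time.
    deferred = [
        lambda: update_computer_position(game_state),
        lambda: settle_resource_totals(game_state),
    ]

    def drain():
        for batch in deferred:
            batch()

    threading.Thread(target=drain, daemon=True).start()
```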