I have implemented a simple time-syncing algorithm that calculates each slave's offset from the master server. There is some lag from processing the code itself, which gets added to the timestamps. So I was wondering: how can I test whether my algorithm is actually syncing timestamps between systems?
The following is my time-sync logic in Node.js:
var offsets = [];  // the ten most recent offset estimates, newest first

var onSync = function (data) {
    // data.t0 = timestamp when this slave sent the request,
    // data.t1 = the master's timestamp from the reply
    var diff = Date.now() - data.t1 + ((Date.now() - data.t0) / 2);
    offsets.unshift(diff);
    if (offsets.length > 10)
        offsets.pop();
    console.log("Order no", data.ord, "The offset is", offsets[0],
                "time in server was =", data.t1,
                "time in the slave =", Date.now());
};
The systems communicate using socket.io. I use a global counter variable on the server which is incremented on every request, and its current value is sent to the client as data.ord.
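Roughly, the exchange looks like this (a simplified sketch rather than my exact code; the event names 'sync'/'syncResponse', the counter name and the port are placeholders, and it assumes socket.io 1.x):

// master sketch: echo the client's send time, add our own time and an order number
var io = require('socket.io')(3000);
var ord = 0;  // global request counter, incremented on every sync request

io.on('connection', function (socket) {
    socket.on('sync', function (req) {
        ord += 1;
        socket.emit('syncResponse', { t0: req.t0, t1: Date.now(), ord: ord });
    });
});

// slave sketch: poll the master once a second, recording the send time t0
var socket = require('socket.io-client')('http://localhost:3000');

socket.on('syncResponse', onSync);  // onSync as defined above

setInterval(function () {
    socket.emit('sync', { t0: Date.now() });
}, 1000);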
So currently I have a single master server and multiple slave servers that keep polling it for timestamps. The following is the output I get:
Master Node:
rahul@g3ck0:~/programs/dos_homework/hw2$ nodejs ob1.js
Current server timestamp is 1395043602717 order no is 1
Current server timestamp is 1395043603263 order no is 2
Current server timestamp is 1395043603717 order no is 3
Current server timestamp is 1395043604264 order no is 4
Current server timestamp is 1395043604719 order no is 5
Current server timestamp is 1395043605265 order no is 6
Current server timestamp is 1395043605720 order no is 7
Current server timestamp is 1395043606267 order no is 8
Slave 1:
rahul@g3ck0:~/programs/dos_homework/hw2$ nodejs slave1.js
Order no 1 The offset is 2.5 time in server was = 1395043602718 time in the slave = 1395043602719
Order no 3 The offset is 2 time in server was = 1395043603717 time in the slave = 1395043603718
Order no 5 The offset is 1.5 time in server was = 1395043604719 time in the slave = 1395043604720
Order no 7 The offset is 0 time in server was = 1395043605720 time in the slave = 1395043605720
Slave 2:
rahul@g3ck0:~/programs/dos_homework/hw2$ nodejs slave2.js
Order no 2 The offset is 6 time in server was = 1395043603263 time in the slave = 1395043603268
Order no 4 The offset is 2.5 time in server was = 1395043604264 time in the slave = 1395043604265
Order no 6 The offset is 2 time in server was = 1395043605265 time in the slave = 1395043605266
Order no 8 The offset is 2 time in server was = 1395043606267 time in the slave = 1395043606268
As you can see
offset + timestamp(master) > timestamp(slave)
But this difference keeps decreasing over time. All in all, I am not sure this is the right way of doing it. I would love your input on:
1. How do I implement a better algorithm?
2. How do I test it?
You are unlikely to do better than getting your hosts to sync themselves with NTP (http://en.wikipedia.org/wiki/Network_Time_Protocol). Both Linux and Windows can be persuaded to sync with NTP, and to act as NTP servers if necessary (on a Windows Active Directory domain, one domain controller is a slightly oddly configured NTP server, so as to keep the other machines close enough in time for Kerberos authentication to work).
Typical computer clocks aren't very accurate. If you compare a system with sync on and sync off over a weekend, you should find noticeable drift with sync off but hopefully little or none with sync on. You could also start machines up significantly out of sync and see how long they take to converge. By default NTP won't sync anything more than one hour adrift. I have found that irritating, because we had problems with VMware setups that left machines some hours adrift, and sometimes we have machines come up in 1970; but I am prepared to believe there is a good reason for not syncing extreme differences, so I wouldn't aim to fix anything more than one hour out.
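If you want a quick way to watch that drift from your own code, a crude check is to ask an NTP server directly and compare against the local clock. Here is a rough Node.js sketch of that idea (the server pool.ntp.org and the one-minute interval are just examples, there is no error handling, and on a host already running ntpd the output of ntpq -p will give you better numbers):

// drift-check.js -- rough sketch: send a minimal SNTP request over UDP and
// log how far the local clock appears to be from the NTP server's clock.
// Run it with OS time sync enabled and again with it disabled to compare drift.
var dgram = require('dgram');

var NTP_HOST = 'pool.ntp.org';      // example server, substitute your own
var NTP_PORT = 123;
var NTP_EPOCH_OFFSET = 2208988800;  // seconds between 1900-01-01 and 1970-01-01

function checkOffset() {
    var socket = dgram.createSocket('udp4');
    var packet = new Buffer(48);
    packet.fill(0);
    packet[0] = 0x1b;  // LI = 0, version = 3, mode = 3 (client)

    var sent = Date.now();
    socket.send(packet, 0, packet.length, NTP_PORT, NTP_HOST);

    socket.on('message', function (msg) {
        var received = Date.now();
        // transmit timestamp: seconds since 1900 at byte 40, fraction at byte 44
        var secs = msg.readUInt32BE(40) - NTP_EPOCH_OFFSET;
        var frac = msg.readUInt32BE(44) / Math.pow(2, 32);
        var serverTime = (secs + frac) * 1000;
        // rough offset estimate: server time vs the midpoint of the round trip
        var offset = serverTime - (sent + (received - sent) / 2);
        console.log(new Date().toISOString(), 'offset vs NTP (ms):', offset.toFixed(1));
        socket.close();
    });
}

checkOffset();
setInterval(checkOffset, 60 * 1000);

Logging that figure over a weekend, once with the OS sync daemon running and once without, should make the difference obvious.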