boost::gregorian::date to a Unix timestamp as a double


Brief background:

I am trying to plot candlestick charts of stocks using QCustomPlot version 1.3 beta. I skimmed through the library's code and found that, for the time series, it uses a typedef (qcustomplot.h:line 3140)

typedef QMap<double, QCPFinancialData> QCPFinancialDataMap;

where QCPFinancialData is defined as (qcustomplot.h:line 3124)

class QCP_LIB_DECL QCPFinancialData
{
public:
  QCPFinancialData();
  QCPFinancialData(double key, double open, double high, double low, double close);
  double key, open, high, low, close;
};

So the OHLC data is obviously there, and the class carries a key, which the QMap uses to index the time-series entry.
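Just to make the usage concrete, here is a minimal sketch of how such a map could be filled (my own illustration; nothing beyond the QCPFinancialData constructor above is assumed):

#include "qcustomplot.h"

// One hypothetical end-of-day bar; `key` is the date encoded as a double,
// which is exactly what this question is about.
void addBar(QCPFinancialDataMap &data, double key,
            double open, double high, double low, double close)
{
  data.insert(key, QCPFinancialData(key, open, high, low, close));
}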

So the obvious key would be the date-time (I am plotting end-of-day charts, so each entry is simply a date; no time is used). In my parsing code, I've used

boost::gregorian::date

since it has quite a lot of advantages (conversion from strings, calculating the elapsed time between dates, etc.).
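For example (a tiny sketch of what I mean; nothing QCustomPlot-specific):

#include <boost/date_time/gregorian/gregorian.hpp>

// parse dates from strings and compute the elapsed days between them
boost::gregorian::date d1 = boost::gregorian::from_simple_string("2014-11-10");
boost::gregorian::date d2 = boost::gregorian::from_simple_string("2014-11-11");
long elapsedDays = (d2 - d1).days();   // 1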

The question is: should I go ahead and simply convert the boost::gregorian::date to a Unix timestamp, and then record that timestamp as a double? I found a small template function on GitHub that converts it to time_t, but I assume a double shouldn't be a problem in this case, or is this a potential bug? AFAIK, a Unix timestamp denotes the seconds since Jan 01 1970, which, when represented as a double, should be more than enough for a key?
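Here is a minimal sketch of the conversion I have in mind (my own code, taking midnight of each date; not taken from any library example):

#include <boost/date_time/gregorian/gregorian.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

// Seconds since the Unix epoch at midnight of the given date, as a double.
double dateToKey(const boost::gregorian::date &d)
{
  static const boost::posix_time::ptime epoch(boost::gregorian::date(1970, 1, 1));
  return double((boost::posix_time::ptime(d) - epoch).total_seconds());
}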

In the QCustomPlot examples, the key is an accumulator/counter from the beginning of the time series (e.g., the starting date) rather than a timestamp.

There is 1 answer

Best answer, by Dirk is no longer here:

A timestamp since the epoch can be stored quite conveniently in a double: you have enough room for the seconds since the epoch (i.e., Jan 1, 1970) and still enough resolution for a bit better than a microsecond. A double carries a 53-bit mantissa; present-day timestamps take about 31 bits for the whole seconds, leaving roughly 22 bits, or about a quarter of a microsecond, for the fractional part.
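As a quick standalone check (a plain C++ sketch, not tied to R or Boost), you can look at the spacing between adjacent doubles near a present-day timestamp:

#include <cmath>
#include <cstdio>

int main()
{
    double now = 1415759907.307228;                  // a present-day timestamp as a double
    double step = std::nextafter(now, 2*now) - now;  // gap to the next representable double
    std::printf("resolution near %.6f is %.3g seconds\n", now, step);   // ~2.4e-07
    return 0;
}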

Eg R does this:

R> now <- Sys.time()                    # get current date and time
R> now                                  # default format to test
[1] "2014-11-11 20:38:27.307228 CST"    # NB I have an option set for subsec.
R> as.integer(now)                      # as integer: seconds since epoch
[1] 1415759907
R> as.double(now)                       # as double under default precision
[1] 1415759907
R> print(as.double(now), digits=16)     # as double with forced higher prec.
[1] 1415759907.307228
R> 

I use these as doubles at the C/C++ layer all the time. And if I am not mistaken, you can get Boost to do the conversion for you.

Edit: I knew I had it somewhere:

// dt here is some external date-time object with accessors for its parts
boost::posix_time::ptime pt;
pt = boost::posix_time::ptime(boost::gregorian::date(dt.getYear(),
                                                     dt.getMonth(),
                                                     dt.getDay()),
                              // the fourth argument is the fractional part in ticks
                              // (microseconds at the default resolution)
                              boost::posix_time::time_duration(dt.getHours(),
                                                               dt.getMinutes(),
                                                               dt.getSeconds(),
                                                               dt.getMicroSeconds()/1000.0));

And a conversion to seconds since the epoch (time_t plus the subseconds):

boost::posix_time::ptime dt = /* ... */;          // the time point to convert

boost::posix_time::ptime epoch(boost::gregorian::date(1970,1,1));
boost::posix_time::time_duration x = dt - epoch;  // needs a UTC-to-local correction,
                                                  // but it gives us the fractional seconds
struct tm t = boost::posix_time::to_tm(dt);       // to_tm() helps with the UTC conversion
double tt = mktime(&t) + 1.0 * x.fractional_seconds() / x.ticks_per_second();
                                                  // kept as a double to retain the subseconds
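Or, staying entirely in Boost and skipping mktime (a sketch reusing the duration x from above; dt would have to be in UTC already):

// seconds since the epoch with the fractional part kept -- this is the
// double you can use directly as the key in the QCPFinancialDataMap
double key = x.total_seconds() + 1.0 * x.fractional_seconds() / x.ticks_per_second();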