I have been using the following clock definition for a frame timer for years now:
using frame_clock = std::conditional_t<
    std::chrono::high_resolution_clock::is_steady,
    std::chrono::high_resolution_clock,
    std::chrono::steady_clock>;
In other words, I want a clock with the highest available resolution, but one that is guaranteed to increment monotonically. Note that MSVC currently uses the following alias to define std::chrono::high_resolution_clock:
using high_resolution_clock = steady_clock;
Therefore, on MSVC, the alias I have defined will just use std::chrono::steady_clock. This is not necessarily true for libstdc++ and libc++, hence the use of the alias.
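For context, here is a minimal sketch of how I typically use this alias for frame timing; the frame_start/frame_end names and the cast to microseconds are just illustrative, not part of the definition above:

#include <chrono>
#include <type_traits>

using frame_clock = std::conditional_t<
    std::chrono::high_resolution_clock::is_steady,
    std::chrono::high_resolution_clock,
    std::chrono::steady_clock>;

int main()
{
    // Timestamp taken at the start of a frame.
    const auto frame_start = frame_clock::now();

    // ... per-frame work goes here ...

    // Timestamp at the end of the frame; the difference is the frame time.
    const auto frame_end = frame_clock::now();
    const auto frame_time =
        std::chrono::duration_cast<std::chrono::microseconds>(frame_end - frame_start);
    (void)frame_time; // e.g. feed this into an FPS counter or a pacing loop
}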
Recently, I stumbled across a footnote here: https://en.cppreference.com/w/cpp/chrono/high_resolution_clock
Notice that cppreference explicitly discourages the use of std::chrono::high_resolution_clock. Their rationale was that the clock varies by implementation... but isn't this true for std::chrono::steady_clock and std::chrono::system_clock as well? For instance, I was unable to find anything guaranteeing that a clock's tick period must be expressed in any particular units. In fact, I understand that this is by design.
My question is, after having used std::chrono::high_resolution_clock for so many years (for frame timers and benchmarks), should I be more concerned than I am? Even here on this site, I see many recommendations to use std::chrono::high_resolution_clock, despite what this footnote says. Any further insight into this disparity, or examples of where it could cause problems, would be much appreciated.
For practical purposes, you only have 3 choices:
std::system_clock (if you want to stay inside C++; OS-level routines do exist)
std::steady_clock. There is no implementation out there that has a steady clock with a resolution higher than what you get with std::steady_clock