Reputation: 1217
I'm trying to use a time_point to effectively represent forever by setting it to seconds::max, which I believe should represent that much time since the epoch. When I do this, though, the resulting time_point reports -1 as its time since the epoch. What am I not understanding?
#include <iostream>
#include <chrono>
using namespace std;
using namespace std::chrono;
int main() {
    auto tp1 = system_clock::time_point( seconds::zero() );
    auto tp2 = system_clock::time_point( seconds::max() );

    cout << "tp1: " << duration_cast<seconds>(tp1.time_since_epoch()).count() << endl;
    cout << "tp2: " << duration_cast<seconds>(tp2.time_since_epoch()).count() << endl;

    return 0;
}
The output from running this is:
tp1: 0
tp2: -1
Upvotes: 4
Views: 2506
Reputation: 219345
Here's a little quick&dirty program to explore the limits of system_clock time_points at different precisions:
#include <chrono>
#include <iostream>

// 1 day = 24 hours; a civil year averages 146097/400 days (Gregorian).
using days = std::chrono::duration
    <int, std::ratio_multiply<std::ratio<24>, std::chrono::hours::period>>;

using years = std::chrono::duration
    <double, std::ratio_multiply<std::ratio<146097, 400>, days::period>>;

template <class Rep, class Period>
void
max_limit(std::chrono::duration<Rep, Period> d)
{
    // Convert the maximum representable duration to years and offset
    // by the 1970 epoch to get the calendar year it overflows in.
    std::cout << "[" << Period::num << '/' << Period::den << "] ";
    std::cout << years{d.max()}.count() + 1970 << '\n';
}

int
main()
{
    using namespace std;
    using namespace std::chrono;
    max_limit(nanoseconds{});
    max_limit(microseconds{});
    max_limit(milliseconds{});
    max_limit(seconds{});
}
This will output the year (in floating point) that a time_point<system_clock, D> will max out at for any duration D. This program outputs:
[1/1000000000] 2262.28
[1/1000000] 294247
[1/1000] 2.92279e+08
[1/1] 2.92277e+11
Meaning a system_clock based on nanoseconds overflows in the year 2262. If you coarsen that to microseconds, you overflow in the year 294,247. And so on.
Once you coarsen to seconds, the max goes out to a ridiculous range. But when you convert that back to a system_clock::time_point, which is at least as fine as microseconds, and perhaps as fine as nanoseconds (depending on your platform), you just blow it out of the water.
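To make the failure mode concrete, here is a minimal sketch (assuming a system_clock with microsecond precision, which the questioner's platform appears to have) of the conversion that produces the -1:
#include <chrono>
#include <iostream>

int main()
{
    using namespace std::chrono;
    // seconds::max() holds 2^63 - 1 ticks. Converting to microseconds
    // multiplies that count by 1,000,000, which overflows the signed
    // 64-bit representation. The overflow is undefined behavior in
    // principle; on a typical two's-complement machine it wraps to
    // -1,000,000 microseconds, which reads back as -1 seconds.
    auto us = duration_cast<microseconds>(seconds::max());
    std::cout << us.count() << '\n';                          // -1000000
    std::cout << duration_cast<seconds>(us).count() << '\n';  // -1
}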
To solve your problem I recommend:
auto M = system_clock::time_point::max();
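As a quick usage sketch (not from the original answer): max() is already expressed in the clock's own duration, so it survives a round trip through time_since_epoch() without overflow and works as a "forever" sentinel:
#include <chrono>
#include <iostream>

int main()
{
    using namespace std::chrono;
    auto forever = system_clock::time_point::max();

    // Casting down to a coarser unit divides the tick count, which is
    // always safe, so this prints a large positive number of seconds.
    std::cout << duration_cast<seconds>(forever.time_since_epoch()).count()
              << '\n';

    // Any real clock reading compares less than the sentinel.
    std::cout << std::boolalpha << (system_clock::now() < forever) << '\n';
}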
Upvotes: 3
Reputation: 69912
Adding a few more diagnostics shows the issue (on my system):
#include <iostream>
#include <chrono>
using namespace std;
using namespace std::chrono;
int main() {
    auto tp1 = system_clock::time_point( seconds::zero() );
    auto tp2 = system_clock::time_point( seconds::max() );

    using type = decltype(system_clock::time_point(seconds::zero()));

    cout << type::duration::max().count() << endl;  // max ticks of the clock's duration
    cout << type::duration::period::den << endl;    // ticks per second: denominator...
    cout << type::duration::period::num << endl;    // ...and numerator
    cout << seconds::max().count() << endl;
    cout << milliseconds::max().count() << endl;

    cout << "tp1: " << duration_cast<seconds>(tp1.time_since_epoch()).count() << endl;
    cout << "tp2: " << duration_cast<seconds>(tp2.time_since_epoch()).count() << endl;

    return 0;
}
For me, the denominator value is 1,000,000 for the system_clock's time_point. Thus seconds::max() is going to overflow it when converted up.
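If you genuinely need to build a time_point from a coarse duration that may not fit, one defensive option is to clamp before converting. This is only a sketch; the helper name is made up, and it assumes Dur is no coarser than seconds:
#include <chrono>
#include <iostream>

// Hypothetical helper: convert seconds to Dur, clamping to Dur::max()
// instead of overflowing on the widening multiply.
template <class Dur>
Dur to_duration_clamped(std::chrono::seconds s)
{
    using std::chrono::duration_cast;
    using std::chrono::seconds;
    // Dur::max() expressed in whole (truncated) seconds; anything at or
    // beyond this limit cannot be represented in Dur without overflow.
    auto limit = duration_cast<seconds>(Dur::max());
    return s >= limit ? Dur::max() : duration_cast<Dur>(s);
}

int main()
{
    using namespace std::chrono;
    auto safe = system_clock::time_point(
        to_duration_clamped<system_clock::duration>(seconds::max()));
    // Prints a large positive count instead of -1.
    std::cout << duration_cast<seconds>(safe.time_since_epoch()).count()
              << '\n';
}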
Upvotes: 1