Miek

Reputation: 1137

Adjusting for precision when casting a uint64_t to a double

I have a method that returns a string representation of a time in the format _HH:MM:SS.sss. The method takes a uint64_t time in microseconds as its argument. I can divide out the hours and minutes easily enough, but for the seconds I need to cast to a double so I can keep the fraction of a second as well.

Sample code:

#include <cstdint>
#include <iomanip>
#include <sstream>
#include <string>

std::string getDurationTime(uint64_t time){
   uint64_t tempTime = time;

   // Whole hours, then whole minutes (the divisors are in microseconds).
   uint64_t hours = tempTime / 3600000000ULL;
   tempTime -= hours * 3600000000ULL;

   uint64_t minutes = tempTime / 60000000ULL;
   tempTime -= minutes * 60000000ULL;

   // The remaining microseconds become fractional seconds.
   double seconds = double(tempTime) / 1000000;

   std::stringstream timeOut;
   timeOut << std::setw(2) << std::setfill('0') << hours << ":"
           << std::setw(2) << std::setfill('0') << minutes << ":"
           << std::fixed << std::setw(6) << std::setprecision(3)
           << std::setfill('0') << seconds;

   return "_" + timeOut.str();
}

So a number like 103566 should produce _00:00:00.104, but instead it returns _00:00:00.103. When I put a breakpoint on seconds, the value is 0.103499999. Is there a standard technique for adjusting for this precision? If I add 0.000001 to the resulting seconds, will that fix the problem every time at this precision?
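A minimal sketch (not from the original post) that makes the stored value visible. A decimal like 0.1035 has no exact binary representation, so the nearest double sits slightly below it, and std::fixed with std::setprecision(3) then rounds down to 0.103. The input 103500 µs is an assumption chosen to reproduce the 0.103499999 seen at the breakpoint:

#include <iomanip>
#include <iostream>

int main() {
    // 103500 us = 0.1035 s; 0.1035 is not exactly representable in binary,
    // so the stored double is a hair below the decimal value.
    double seconds = double(103500) / 1000000;

    std::cout << std::setprecision(17) << seconds << "\n"; // 0.10349999999999999
    std::cout << std::fixed << std::setprecision(3)
              << seconds << "\n";                          // 0.103, not 0.104
    return 0;
}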

Any advice is appreciated.

Upvotes: 1

Views: 1509

Answers (1)

Mats Petersson

Reputation: 129374

You could use seconds = round(seconds * 1000.0) / 1000.0; to round your number to thousandths of a second.
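A minimal runnable sketch (not part of the original answer) showing that line in context; std::round comes from <cmath>, and the 103500 µs input is again an assumption chosen to reproduce the question's symptom:

#include <cmath>
#include <iomanip>
#include <iostream>

int main() {
    double seconds = double(103500) / 1000000;  // stored as 0.10349999999999999...

    // Snap to the nearest millisecond before formatting.
    seconds = std::round(seconds * 1000.0) / 1000.0;

    std::cout << std::fixed << std::setw(6) << std::setprecision(3)
              << std::setfill('0') << seconds << "\n"; // now prints 00.104
    return 0;
}

This works here because seconds * 1000.0 rounds to exactly 103.5 in double arithmetic, std::round(103.5) is 104.0, and 104.0 / 1000.0 then prints as 0.104.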

Upvotes: 2
