javadude

Reputation: 1813

Convert date to Unix timestamp in PostgreSQL

I have a table with a column abc carrying a Unix timestamp in milliseconds (e.g. 13898161481435), and I want to run a between-dates select.

It would not be efficient to do a

where TO_CHAR(TO_TIMESTAMP(abc / 1000), 'DD/MM/YYYY HH24:MI:SS') > '14/01/2014 00:00:00' and ..;

which would convert every record.

Rather, I'd like to do something like

where abc > tobigint('14/01/2014 00:00:00') and abc < ...

But I can't find any reference for such a function, only for the reverse case.

Upvotes: 28

Views: 74761

Answers (4)

javadude

Reputation: 1813

Query 1:

select count(*) from cb.logs
where to_timestamp(timestmp/1000) > timestamp '2014-01-15 00:00:00'
  and to_timestamp(timestmp/1000) < timestamp '2014-01-15 23:59:59';

8600 ms

"Aggregate  (cost=225390.52..225390.53 rows=1 width=0)"
"  ->  Seq Scan on logs  (cost=0.00..225370.34 rows=8073 width=0)"
"        Filter: ((to_timestamp(((timestmp / 1000))::double precision) > '2014-01-15 00:00:00'::timestamp without time zone) AND (to_timestamp(((timestmp / 1000))::double precision) < '2014-01-15 23:59:59'::timestamp without time zone))"

Query 2:

select count(*) from cb.logs
where timestmp > (select extract(epoch from timestamp '2014-01-15 00:00:00') * 1000)
  and timestmp < (select extract(epoch from timestamp '2014-01-15 23:59:59') * 1000);

1199 ms
"Aggregate  (cost=209245.94..209245.95 rows=1 width=0)"
"  InitPlan 1 (returns $0)"
"    ->  Result  (cost=0.00..0.01 rows=1 width=0)"
"  InitPlan 2 (returns $1)"
"    ->  Result  (cost=0.00..0.01 rows=1 width=0)"
"  ->  Seq Scan on logs  (cost=0.00..209225.74 rows=8073 width=0)"
"        Filter: (((timestmp)::double precision > $0) AND ((timestmp)::double precision < $1))"

Upvotes: 2

javadude

Reputation: 1813

Interesting observation, though: while

select count(*) from cb.logs
where to_timestamp(timestmp/1000) > timestamp '2014-01-15 00:00:00'
  and to_timestamp(timestmp/1000) < timestamp '2014-01-15 23:59:59';

takes almost 10 seconds (on my DB with 1.5 million records), the one below only 1.5 seconds

select count(*) from cb.logs
where timestmp > (select extract(epoch from timestamp '2014-01-15 00:00:00') * 1000)
  and timestmp < (select extract(epoch from timestamp '2014-01-15 23:59:59') * 1000);

and the one below only about 1 second

select count(*) from cb.logs
where timestmp > extract(epoch from timestamp '2014-01-15 00:00:00') * 1000
  and timestmp < extract(epoch from timestamp '2014-01-15 23:59:59') * 1000;

to count ~40,000 records.

Most likely because of the per-row division and to_timestamp conversion, I would say.
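
If you want to confirm where the time goes, EXPLAIN ANALYZE prints the actual runtime of each plan node (a sketch using the slow query shape from above):

-- the slow shape: to_timestamp(timestmp/1000) is evaluated for every row
explain analyze
select count(*) from cb.logs
where to_timestamp(timestmp/1000) > timestamp '2014-01-15 00:00:00'
  and to_timestamp(timestmp/1000) < timestamp '2014-01-15 23:59:59';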

Upvotes: 3

Vignesh Kumar A

Reputation: 28423

Try this

WHERE abc > extract(epoch from timestamp '2014-01-28 00:00:00')
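
Note that the question's abc column holds milliseconds (the question divides it by 1000), while extract(epoch from ...) yields seconds, so the bound needs scaling (a sketch):

-- abc is in milliseconds; extract(epoch from ...) returns seconds
WHERE abc > extract(epoch from timestamp '2014-01-28 00:00:00') * 1000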

PostgreSQL Docs

Upvotes: 43

harmic

Reputation: 30597

You do not need to convert it to char to compare it.

WHERE to_timestamp(abc/1000) > timestamp '2014-01-28 00:00:00'
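
For the between-dates select from the question, the same idea with both bounds (a sketch; half-open at the upper end so midnight rows fall into exactly one day):

-- abc/1000 converts milliseconds to epoch seconds before the comparison
WHERE to_timestamp(abc/1000) >= timestamp '2014-01-14 00:00:00'
  AND to_timestamp(abc/1000) <  timestamp '2014-01-15 00:00:00'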

I don't think that conversion would be very inefficient, because timestamps are stored internally in a similar format to epoch seconds (microseconds since 2000-01-01, i.e. with a different origin and resolution).

If you really want to go the other way:

WHERE abc > extract(epoch from timestamp '2014-01-28 00:00:00')

Upvotes: 6
