Andrei Ivanov

Reputation: 659

Cassandra python driver event_time

I'm having some trouble converting the event_time that the python driver returns to a timestamp.

Basically, I need to store timestamps with some associated data and then query by time range. At the beginning, I didn't know you could put timestamps in the query, so I would convert them to a string this way:

datetime.utcfromtimestamp(timestamp).strftime('%Y-%m-%d %H:%M:%S')

Now, when I query the data from python (I convert the timestamps the same way as above and put them in the query), I get back datetime objects that I want to convert to timestamps. I tried

(row.event_time - datetime.utcfromtimestamp(0)).total_seconds()

but this seems to get messed up by timezones. Does this mean the datetime returned is not in UTC? I tried some other conversions I found online, but nothing works. Once it almost worked, but then when it got to times around the daylight saving switch I would get an extra one-hour offset.

So, my question is: what's the right way to work with timestamps between Python and Cassandra? Can't it be kept as simple as unix timestamps?

Upvotes: 0

Views: 1168

Answers (1)

Adam Holmberg

Reputation: 7365

Without seeing your code and expected results, I would offer the following:

If you insert the time as a string in the query, the database will assume it is local time and shift it to UTC. If your string included a UTC offset (+0000), this would not happen (note that '%z' will not work here, because the datetime object you're formatting is timezone-naive).
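
For example, a minimal sketch reusing the formatting from the question, where the only change is the appended offset (the time_str name is just for illustration):

time_str = datetime.utcfromtimestamp(timestamp).strftime('%Y-%m-%d %H:%M:%S') + '+0000'  # explicit UTC offset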

If you prefer, you can let the driver handle encoding by using the parameterized forms:

session.execute("INSERT INTO <table> (key, time) VALUES (%s, %s)", (key, datetime.utcfromtimestamp(timestamp))

or

session.execute("INSERT INTO <table> (key, time) VALUES (%s, %s)", (key, long(timestamp * 1000)))  # scale to milliseconds

The values coming back from a query will be UTC, timezone-naive datetime objects.
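
As a rough sketch of the read side (assuming a table shaped like the inserts above, and hypothetical key, start_ts and end_ts variables), a range query and the conversion back to a unix timestamp could look like this; since the returned datetimes are naive UTC, the epoch subtraction from the question gives the expected result:

from datetime import datetime

start = datetime.utcfromtimestamp(start_ts)  # naive UTC bounds for the range
end = datetime.utcfromtimestamp(end_ts)
rows = session.execute("SELECT key, time FROM <table> WHERE key = %s AND time >= %s AND time < %s", (key, start, end))
for row in rows:
    unix_ts = (row.time - datetime.utcfromtimestamp(0)).total_seconds()  # epoch math works on naive UTC datetimes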

Upvotes: 2
