zzzeek:
I completely disagree with this advice:

> However, when you come across timestamp, you can’t possibly know in what time zone the timestamp is just by looking at it.

Yes I do: it's in UTC, because I assume the application was written to handle datetimes correctly.

> It is merely a wall time. Maybe it’s UTC,

No, it's UTC.

> or maybe the developer stored it in local time.

Right, maybe the application was not programmed correctly; that is always a possibility. But if they are using the timestamp with timezone datatype, it is far more likely they have made mistakes, because it is much more confusing to have to fight against PostgreSQL's assumptions and implicit conversions, given how complex the behaviors and decisions around this datatype are (as this article describes).

Does the application come up with the wrong answer if the server it runs on, or the webserver container, is mistakenly set to Eastern Time rather than UTC? Is the application ensuring that it knows the correct timezone of timestamps as they enter the application, and if so, why isn't it converting to UTC at that moment? Or is the application receiving datetimes in arbitrary timezones, letting them move around business and display logic in their raw form, and waiting until they enter the database to be normalized? That's too late.

A correctly coded application explicitly normalizes datetime values the moment they enter the boundaries of the application, just as it would convert encoded bytes into unicode-aware strings up front, so that within the application there is no ambiguity, both when persisting and when handling.
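As a concrete illustration of that boundary rule, here is a minimal Python sketch (the normalize_to_utc helper is hypothetical, not from any particular codebase):

    from datetime import datetime, timezone

    def normalize_to_utc(dt: datetime) -> datetime:
        # Reject naive datetimes at the boundary; convert aware ones to UTC.
        if dt.tzinfo is None:
            raise ValueError("naive datetime crossed the application boundary")
        return dt.astimezone(timezone.utc)

    # At the edge (API handler, message consumer, import job), normalize once:
    incoming = datetime.fromisoformat("2019-03-30T09:15:00-04:00")
    stored = normalize_to_utc(incoming)  # 2019-03-30 13:15:00+00:00

Everything past that point can assume UTC, which is exactly the guarantee being described.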

welder:
Edit: I'm being downvoted, but I know more than you about timestamps. Check my bio before downvoting.

zzzeek's comment is correct; he should be upvoted, not down.

> A correctly coded application does explicit normalization of datetime values the moment they enter the boundaries of the application, just like it would also convert encoded bytes into a unicode-aware strings up front, so that within the application there is no ambiguity, both when persisting and when handling.

This is 100% true, for both datetimes and unicode. I built a time-tracking application that has proved this true many times.

> I completely disagree with this advice

Me too; the article's author clearly doesn't know what he's talking about.

mypalmike:
> A correctly coded application does explicit normalization of datetime values the moment they enter the boundaries of the application... within the application there is no ambiguity

> This is 100% true... I built a time tracking application which has proved this true many times.

The potential for timezone rule changes means that future events specified in local timezones are fundamentally ambiguous with regard to any absolute timeframe. Normalizing such events to UTC can therefore produce errors.

Consider TicketMaster. They have a database of future events around the globe, sometimes years in advance.

Imagine a band scheduled to play on 24-Jul-2020 at 8pm in Tokyo. Japan's government was considering reintroducing daylight saving time in 2020 to reduce heat stress on the Olympic athletes. In August 2018 it decided against the change, but it could have happened; let's pretend it did.

If you had stored this band's gig in a database a few months ago (before the decision to make the change), your UTC-normalized timestamp would be 2020-07-24T11:00Z (UTC+9). But the gig is going to happen at 8pm local time regardless of the rule change. The actual event time is 2020-07-24T10:00Z (UTC+10), an hour earlier than originally scheduled.

Normalizing to UTC makes sense in so many cases that it can seem to be a silver bullet for all cases. But there are definitely business cases that do not work well with this approach.
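To make the failure mode concrete, here is a Python sketch of the scenario. Since the hypothetical Japanese DST change never shipped in tzdata, two fixed offsets stand in for the "before" and "after" rule sets:

    from datetime import datetime, timedelta, timezone

    OLD_RULES = timezone(timedelta(hours=9))   # Tokyo before the change, UTC+9
    NEW_RULES = timezone(timedelta(hours=10))  # Tokyo after the change, UTC+10

    wall_time = datetime(2020, 7, 24, 20, 0)   # "8pm in Tokyo", no offset yet

    # Normalizing at write time bakes in the old rules:
    stored_utc = wall_time.replace(tzinfo=OLD_RULES).astimezone(timezone.utc)
    print(stored_utc)  # 2020-07-24 11:00:00+00:00 -- now an hour late

    # Keeping the wall time and resolving at read time picks up the new rules:
    actual_utc = wall_time.replace(tzinfo=NEW_RULES).astimezone(timezone.utc)
    print(actual_utc)  # 2020-07-24 10:00:00+00:00 -- the real gig time

The UTC instant computed at write time is frozen; the wall time plus a zone identifier is not.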

welder:
Yes, I forgot to make an exception for scheduling future local times. So: use UTC, except for future local datetimes.
jasode:
I didn't downvote any of your comments, but "In this case [...] store the time zone string" seems to contradict your earlier advice to "Always use UTC without time zone".

The issue is that there are 2 types of future times:

(1) scientific-absolute times -- like future solar eclipses; you can normalize these to a future UTC with no ambiguity (other than esoteric adjustments such as leap seconds)

(2) cultural-relative-wall-clock times -- like future recurring appointments, future New Year's midnight celebrations, etc. These can't be normalized to UTC perfectly, because one doesn't know what the exact future UTC time will be, due to unknown government changes to DST rules or TZ boundaries. If you simply normalize to UTC, you throw away information, and that will cause bugs. Your eventual addendum of "store the time zone string" basically acknowledges this edge case for future datetimes.
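A sketch of that "store the time zone string" pattern in Python 3.9+ (the record layout is illustrative, not anyone's actual schema):

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    # Persist the local wall time plus the IANA zone name, not a UTC instant:
    event = {"local": "2020-07-24T20:00:00", "tz": "Asia/Tokyo"}

    # Resolve to UTC only when an instant is needed; this uses whatever
    # tzdata rules are current at read time, surviving future rule changes:
    local_dt = datetime.fromisoformat(event["local"]).replace(
        tzinfo=ZoneInfo(event["tz"]))
    utc_dt = local_dt.astimezone(timezone.utc)
    print(utc_dt)  # 2020-07-24 11:00:00+00:00 under today's rules

Case (1) can still be normalized to UTC eagerly; case (2) is the one that needs this treatment.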