
189 points by GordonS | 1 comment
zzzeek No.19533216
I completely disagree with this advice:

> However, when you come across timestamp, you can’t possibly know in what time zone the timestamp is just by looking at it.

Yes I do: it's in UTC, because I assume the application is written to handle datetimes correctly.

> It is merely a wall time. Maybe it’s UTC,

no, it's UTC.

> or maybe the developer stored it in local time.

Right, maybe the application was not programmed correctly; that is always a possibility. But if they are using the timestamp-with-time-zone datatype, it is 100% more likely they have made mistakes in doing so, because it is much more confusing to have to fight against PostgreSQL's assumptions and implicit conversions, given the complexity of the behaviors and decisions this article describes for that datatype.

Does the application come up with the wrong answer if the server it runs on, or the webserver container, is mistakenly set to Eastern time rather than UTC? Is the application ensuring that it knows the correct time zone of timestamps as they enter the application, and if so, why isn't it converting them to UTC at that moment? Or is the application receiving datetimes in arbitrary time zones, letting them move around business and display logic in their raw form, and waiting until they reach the database to be normalized? That's too late.
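To make the ambiguity concrete, here is a minimal Python sketch (the date and zone names are illustrative): the same naive wall time denotes two different instants depending on which zone you assume it was stored in.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# A naive "wall time", as it might come back from a `timestamp` column.
wall = datetime(2019, 3, 29, 14, 30)  # no tzinfo: in what zone?

# The instant it denotes depends entirely on the zone you assume:
as_utc = wall.replace(tzinfo=timezone.utc)
as_eastern = wall.replace(tzinfo=ZoneInfo("America/New_York"))

# During EDT the two interpretations are four hours apart:
print(as_utc.timestamp() - as_eastern.timestamp())  # -14400.0
```

Nothing in the stored value itself tells you which interpretation is correct; only a convention (such as "everything is UTC") does.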

A correctly coded application does explicit normalization of datetime values the moment they enter the boundaries of the application, just as it would convert encoded bytes into unicode-aware strings up front, so that within the application there is no ambiguity, both when persisting and when handling.
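Boundary normalization of this kind might be sketched like so in Python (the `normalize` helper and its `assumed_zone` parameter are hypothetical names, not from any particular codebase):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def normalize(dt: datetime, assumed_zone: str = "UTC") -> datetime:
    """Normalize an incoming datetime to UTC at the application boundary.

    Naive values are interpreted in `assumed_zone` (an explicit, documented
    choice); aware values are converted. Either way, everything past this
    point in the application is unambiguously UTC.
    """
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=ZoneInfo(assumed_zone))
    return dt.astimezone(timezone.utc)

# An aware value in Eastern time and a naive value assumed to be UTC
# both come out as the same unambiguous UTC instant:
eastern = datetime(2019, 3, 29, 10, 30, tzinfo=ZoneInfo("America/New_York"))
print(normalize(eastern))                         # 2019-03-29 14:30:00+00:00
print(normalize(datetime(2019, 3, 29, 14, 30)))   # 2019-03-29 14:30:00+00:00
```

With this in place, the database column only ever sees already-normalized UTC values, which is the point being argued above.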

grzm No.19533282
> "because I assume the application is written to correctly handle datetimes."

> "it is 100% more likely they have made mistakes in doing so because it is much more confusing to have to fight against PostgreSQL's assumptions and implicit conversions"

If you're assuming the application is correctly written to handle datetimes, you can safely assume it's also correctly using time-zone-aware datetime datatypes and using its database driver correctly, so there's no worry about PostgreSQL's assumptions or implicit conversions.
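This is the property that time-zone-aware datatypes buy you; a small Python illustration (dates are arbitrary): two aware values denoting the same instant compare equal regardless of the zone they carry, so application logic never has to guess.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

utc = datetime(2019, 3, 29, 14, 30, tzinfo=timezone.utc)
eastern = utc.astimezone(ZoneInfo("America/New_York"))

print(eastern)         # 2019-03-29 10:30:00-04:00
print(utc == eastern)  # True: equality compares the instant, not the wall time
```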

Datetime handling is hard. However, most of the issues you're talking about here are independent of whether you're storing instants as timestamp or timestamptz in PostgreSQL. Persistence is only one layer in an application stack.

As an aside, I haven't looked, but if someone hasn't already created a "Zoned Datetime" datatype extension for Postgres, it'd certainly be interesting to try.

tofflos No.19534655
You've nailed the problem. Too many layers in the application stack are tampering with time. As the author of the application I now have to understand every layer and all their nuances. All the good intentions of the database authors, driver authors and library authors are making it worse. Just leave my time alone and let me deal with it in the application.