Most active commenters
  • welder(7)
  • zzzeek(6)
  • jeltz(6)
  • niftich(4)
  • GordonS(3)
  • colanderman(3)
  • twic(3)

189 points GordonS | 90 comments | source
1. cpeterso ◴[] No.19532799[source]
What timestamp types are available in MySQL, SQL Server, and Oracle?
replies(5): >>19532853 #>>19532902 #>>19533224 #>>19533575 #>>19533819 #
2. tyingq ◴[] No.19532853[source]
Be careful with the MySQL ones. A couple of them overflow past 2038-01-19 (the 32-bit Unix time limit), which would break some use cases now. https://bugs.mysql.com/bug.php?id=12654
3. ◴[] No.19532902[source]
4. mjevans ◴[] No.19532963[source]
I think I slightly disagree about the choice of using timestamptz over timestamp. The main reason being that when I feed data into the database I always want that same data back out. From the description I now realize that if I backed up and restored, or otherwise ran the same database with a different configuration, the database might "try to be smart" and return a modified value.

I guess to store a timestamp and a timezone I should use a timestamp plus a second column with the timezone value (maybe a string, but I'd like a clean way of telling the database I want an 'enum' but to dynamically just add new values as it sees them; yes, for PostgreSQL I'd do that with a secondary lookup table and an insert/update trigger).

replies(2): >>19533076 #>>19535839 #
5. minitech ◴[] No.19533076[source]
It would return an equivalent value, which is the important thing. You can specify `AT TIME ZONE 'UTC'` (or whichever timezone) when consuming it to get a consistent look. On the other hand, it’s not clear from just the value what point in time a `timestamp without time zone` represents. That’s why I agree that it should be used when you don’t want to represent a point in time.
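
For example (a minimal sketch; `events` and `created_at` are made-up names for a table with a timestamptz column):

    SELECT created_at AT TIME ZONE 'UTC' AS created_at_utc
    FROM events;
    -- the result is the same UTC wall-clock value no matter what the session TimeZone is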
replies(1): >>19533511 #
6. zeroimpl ◴[] No.19533112[source]
I sometimes wish there was an actual "TIMESTAMP WITH TIME ZONE" type that stored the input timestamp as well. Basically, a version that didn't canonicalize the input by converting to UTC. This way, I can easily show timestamps to the user in the time zone they used to input them.
replies(6): >>19533287 #>>19534038 #>>19534057 #>>19534316 #>>19535066 #>>19541872 #
7. jasonl99 ◴[] No.19533164[source]
That is a phenomenal article about timezones in general that applies to any database, as well as a great explanation of features specific to postgres.

It is interesting that a timestamp with time zone is internally represented as a plain timestamp normalized to UTC.

Postgres also has some great functionality for converting between timezones:

https://www.postgresql.org/docs/11/functions-datetime.html#F...
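
For instance, the timezone() function is the function form of the AT TIME ZONE operator (literal chosen arbitrarily here):

    SELECT timezone('Europe/Berlin', timestamptz '2019-04-01 12:00+00');
    -- 2019-04-01 14:00:00, i.e. Berlin wall time as a plain timestamp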

8. zzzeek ◴[] No.19533216[source]
I completely disagree with this advice:

> However, when you come across timestamp, you can’t possibly know in what time zone the timestamp is just by looking at it.

yes I do, it's in UTC, because I assume the application is written to correctly handle datetimes.

> It is merely a wall time. Maybe it’s UTC,

no, it's UTC.

> or maybe the developer stored it in local time.

right, maybe the application was not programmed correctly, that is always a possibility. But if they are using the timestamp with timezone datatype, it is 100% more likely they have made mistakes in doing so because it is much more confusing to have to fight against PostgreSQL's assumptions and implicit conversions, as the complexity of its behaviors and decisions using this datatype are described in this article. Does the application come up with the wrong answer if the server it runs on, or the webserver container, is mistakenly set to Eastern Timezone rather than UTC? Is the application ensuring that it knows the correct timezone of timestamps as they enter the application, and if so, why isn't it converting to UTC at that moment? Is the application otherwise receiving datetimes of arbitrary timezone, letting them move around business and display logic in their raw form, and waiting until they enter the database to be correctly normalized? that's too late.

A correctly coded application does explicit normalization of datetime values the moment they enter the boundaries of the application, just like it would also convert encoded bytes into unicode-aware strings up front, so that within the application there is no ambiguity, both when persisting and when handling.

replies(7): >>19533282 #>>19533498 #>>19534065 #>>19534073 #>>19534203 #>>19534221 #>>19551427 #
9. emmelaich ◴[] No.19533224[source]
Dunno but sqlite3 assumes times are in utc.
10. grzm ◴[] No.19533282[source]
> "because I assume the application is written to correctly handle datetimes."

> "it is 100% more likely they have made mistakes in doing so because it is much more confusing to have to fight against PostgreSQL's assumptions and implicit conversions"

If you're assuming the application is correctly written to handle datetimes, you can safely assume it's also correctly using time zone aware date time data types, and using their database driver correctly, so there's no worry about PostgreSQL's assumptions or implicit conversions.

Datetime handling is hard. However, most of the issues you're talking about here are independent of whether you're storing instants as timestamp or timestamptz in PostgreSQL. Persistence is only one layer in an application stack.

As an aside, I haven't looked, but if someone hasn't already created a "Zoned Datetime" datatype extension for Postgres, it'd certainly be interesting to try.

replies(1): >>19534655 #
11. niftich ◴[] No.19533287[source]
I frequently wish that datetime libraries and APIs that are frozen in time and awkward in design were given a second pass and redone in the style of Java 8 [1], which is a triumph of clarity after many years of lessons learned. Even putting out a new RFC would help move things along, because I've long felt that RFC 3339 is underspecified [2].

If datetime libraries were more thoroughly patterned after that of Java 8, there would be less need for the 'misconceptions programmers believe about datetimes' posts, and less need for the 'just use UTC' or 'maybe always-UTC isn't the right advice' posts, because the data-types are thoroughly reflective of their intended use, and their misuse is harder than in other APIs.

[1] https://docs.oracle.com/javase/8/docs/api/java/time/package-... [2] https://news.ycombinator.com/item?id=12364805

replies(3): >>19534597 #>>19535194 #>>19535410 #
12. robbiep ◴[] No.19533377[source]
I have a question, that was partially addressed in some comments in the last Timezone in Databases article earlier in the week -

What do people think about using Unix time in databases?

replies(4): >>19533555 #>>19533627 #>>19533651 #>>19535478 #
13. colanderman ◴[] No.19533498[source]
Your assumptions are wrong. PostgreSQL's time functions provide exactly the semantics described in the article. If you pretend the two types work the other way around, you will get incorrect results when using e.g. `AT TIME ZONE`.
replies(1): >>19534151 #
14. colanderman ◴[] No.19533511{3}[source]
Moreover, `AT TIME ZONE` only works correctly when used following the semantics described in the article. It will do the opposite of what you think it should if you use `timestamp without time zone` to represent an absolute time, or vice-versa.
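
To make that concrete, assuming the session TimeZone is UTC (arbitrary literals):

    -- timestamptz AT TIME ZONE: converts the stored instant into Tokyo wall time
    SELECT timestamptz '2019-04-01 03:00+00' AT TIME ZONE 'Asia/Tokyo';  -- 2019-04-01 12:00:00
    -- timestamp AT TIME ZONE: interprets the wall time as Tokyo time and yields an instant
    SELECT timestamp '2019-04-01 03:00' AT TIME ZONE 'Asia/Tokyo';       -- 2019-03-31 18:00:00+00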
15. paulryanrogers ◴[] No.19533555[source]
Directly using Unix timestamps is unlikely to save much space since most internal representations are likely similar. They're also difficult for mortals to read and change in human-significant ways (like 3 calendar days ago).
16. userbinator ◴[] No.19533575[source]
SQL Server has datetimeoffset, which computes in UTC but also stores the timezone.
replies(1): >>19534190 #
17. TheAceOfHearts ◴[] No.19533627[source]
It depends on your use-case. You should also be aware of the Year 2038 problem [0].

What are you trying to achieve?

[0] https://en.wikipedia.org/wiki/Year_2038_problem

replies(1): >>19533747 #
18. niftich ◴[] No.19533651[source]
I feel that storing unix epoch seconds in a >=4-byte fixed length numeric or arbitrary precision numeric column is a reasonable choice if:

1) the record naturally has the semantics of a timestamp that is chiefly used for comparisons between values (e.g. to sort, min, max, subtract to get an approximate interval) and whose conversions to human-meaningful calendrical datetime are not needed or only need to be approximate (given unix timestamps' lack of leap seconds),

AND 2) you've thought about overflow (of both unix time and your column), signed-ness, sentinel and special values, and NULL handling,

AND 3) you're not enthused by the ANSI SQL definitions of TIMESTAMP types or think you, your RDBMS, your database driver, your programming language, your datetime library, or the code's next maintainer might screw something up.

If only the first factor isn't true, use an RFC 3339 UTC timestamp in a string column instead. If only the second factor isn't true, use one of the SQL TIMESTAMP types instead, so you can pick up a better-defined suite of behaviors.
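
As a sketch of the epoch-seconds option (the table and column names here are hypothetical, and whether bigint is wide enough is part of point 2):

    CREATE TABLE sensor_readings (
      id          bigserial PRIMARY KEY,
      recorded_at bigint NOT NULL  -- unix epoch seconds; UTC by definition, no leap seconds
    );
    -- approximate calendrical view only when needed:
    SELECT to_timestamp(recorded_at) FROM sensor_readings;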

19. robbiep ◴[] No.19533747{3}[source]
Largely timestamping. Accuracy to a high level (leap seconds) is irrelevant. Aware of the 2038 problem; assume we will have some fun on the way there along with the rest of the unix world!
replies(1): >>19534097 #
20. niftich ◴[] No.19533819[source]
Oracle has 'DATE', 'TIMESTAMP', 'TIMESTAMP WITH TIME ZONE', and 'TIMESTAMP WITH LOCAL TIME ZONE'.

Out of these, 'TIMESTAMP' and 'TIMESTAMP WITH TIME ZONE' do exactly what you'd expect without needing a big tutorial.

'DATE' is a 'TIMESTAMP' that can't store fractional seconds.

And 'TIMESTAMP WITH LOCAL TIME ZONE' is an unmitigated footgun that you'll probably misuse.

21. andynrkri ◴[] No.19533844[source]
Thanks for this. Was very helpful!
22. sudhirj ◴[] No.19533918[source]
Doesn’t this mean that the default Rails treatment of time stamps (created and updated at) in Postgres is wrong? Any plans to change that?
23. welder ◴[] No.19534038[source]
Edit: I'm being downvoted, but I know more than you about timestamps. Check my bio before downvoting.

You want a column that can store both timestamp and timezone, so why not just have 2 columns? You already have to convert the timestamp into the user's timezone when rendering because the user's timezone can change. Storing timestamps with timezone just complicates things, for ex:

display_in_user_zone(TIMESTAMP UTC)

display_in_user_zone(TIMESTAMP_WITH_TIMEZONE)

The above is exactly the same, but the UTC one will prevent bugs in the long run.

The only time it's acceptable to store time with timezone is with DATES that do not have a TIME component.

If you really need to display the timestamp in the timezone the user originally entered, store the original timezone along with every timestamp.
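
A rough sketch of that two-column approach (names invented):

    CREATE TABLE user_actions (
      occurred_at_utc timestamp NOT NULL,  -- always UTC, converted by the application
      original_tz     text                 -- e.g. 'America/New_York', kept only for display
    );
    -- render in the zone the user originally used:
    SELECT (occurred_at_utc AT TIME ZONE 'UTC') AT TIME ZONE original_tz FROM user_actions;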

replies(3): >>19534517 #>>19534530 #>>19535511 #
24. Figs ◴[] No.19534057[source]
We needed exactly that functionality at work a few years ago, so, of course, "TIMESTAMP WITH TIME ZONE" got picked for the column in the DB. After a few weeks of people entering data, DST switched over. Eventually someone noticed all the old dates were all off by an hour...

It really should have been called "TIMESTAMP IN UTC" or "NORMALIZED TIMESTAMP" or something else instead. It's especially dangerous because the results you get back from using it look correct with casual testing and you might not notice that your database is subtly wrong for a long time.

replies(2): >>19534353 #>>19535060 #
25. welder ◴[] No.19534065[source]
Edit: I'm being downvoted, but I know more than you about timestamps. Check my bio before downvoting.

zzzeek's comment is correct, he should be upvoted not down.

> A correctly coded application does explicit normalization of datetime values the moment they enter the boundaries of the application, just like it would also convert encoded bytes into unicode-aware strings up front, so that within the application there is no ambiguity, both when persisting and when handling.

This is 100% true, about handling datetimes and unicode. I built a time tracking application which has proved this true many times.

> I completely disagree with this advice

Me too, the article op clearly doesn't know what he's talking about.

replies(2): >>19534406 #>>19535306 #
26. djrobstep ◴[] No.19534073[source]
Why is this correct take being downvoted?

Add consistency to your codebase by saving all timestamps in UTC.

Edited: Storage size is the same.

replies(2): >>19534077 #>>19535382 #
27. msla ◴[] No.19534075[source]
I'd be happy with everyone being extremely clear about the two kinds of time: Wallclock-time and duration-time.

What's the difference? Duration-time does not have leap-seconds.

I'll say it again: Duration-time does not have leap-seconds.

Once more: Duration-time does not have leap-seconds.

For duration-time without leap-seconds, give me TAI:

> International Atomic Time (TAI) is a statistical atomic time scale based on a large number of clocks operating at standards laboratories around the world that is maintained by the Bureau International des Poids et Mesures; its unit interval is exactly one SI second at sea level. The origin of TAI is such that UT1-TAI is approximately 0 (zero) on January 1, 1958. TAI is not adjusted for leap seconds. It is recommended by the BIPM that systems which cannot handle leapseconds use TAI instead.

https://tycho.usno.navy.mil/leapsec.html

Document that it is TAI.

Include in the documentation that TAI does not include leap-seconds.

Thank you.

replies(2): >>19534286 #>>19535401 #
28. grzm ◴[] No.19534077{3}[source]
There's no difference in storage between timestamp and timestamptz in PostgreSQL. Both are 8 bytes.
29. TheAceOfHearts ◴[] No.19534097{4}[source]
Storing Unix timestamps in a 64-bit signed integer field would probably be fine in that situation. The only downsides that come to mind are that you'd have to manually convert the number to an actual date when using it, and that you'd lose easy access to date-related functionality in the database.
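
For reference, the manual conversion in a query would look roughly like this (hypothetical table and column):

    SELECT date(to_timestamp(created_epoch) AT TIME ZONE 'UTC') AS created_date, count(*)
    FROM orders
    GROUP BY 1;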
replies(1): >>19535461 #
30. welder ◴[] No.19534121[source]
> A Valid Use Case For timestamp without time zone

This should be "A Valid Use Case For timestamp with time zone".

This article gets it all wrong. The only time it's acceptable to store time with time zone is DATES that do not have a TIME component.

replies(1): >>19534300 #
31. welder ◴[] No.19534151{3}[source]
No, he's right. Before you downvote me, check my bio for my experience.

Literally everything in zzzeek's comment I've personally seen causing live bugs in production code. Always use UTC without time zone and always convert to UTC as soon as possible in your application.

> Your assumptions are wrong.

zzzeek isn't assuming, he's speaking from experience. He wrote the SQLAlchemy library, so he knows what he's talking about.

replies(1): >>19535277 #
32. crooked-v ◴[] No.19534190{3}[source]
Does it store the time zone, or does it store the offset?
replies(1): >>19535099 #
33. hathawsh ◴[] No.19534203[source]
Thank you for chiming in. In my SQLAlchemy+Postgres app I've chosen to store datetimes as timestamp without timezone columns where the time zone is always UTC. I'm quite happy with that choice. The last thing I want is for Postgres to start implicitly adjusting timestamps. By making the timestamps non-timezone-aware, I'm telling Postgres that if I ever want it to do any time zone conversion, I'll ask for the conversion explicitly, never implicitly. Explicit time zone conversion works quite nicely in Postgres. I want no magic.
34. alboy ◴[] No.19534221[source]
> yes I do, it's in UTC, because I assume the application is written to correctly handle datetimes.

This seems to assume that only past events may be stored in a database. How would one schedule events at some clock-on-the-wall point in the future when either the exact applicable time zone is unknown or the mapping of a particular time zone to UTC isn't guaranteed to remain unchanged?

replies(1): >>19535665 #
35. ◴[] No.19534286[source]
36. groestl ◴[] No.19534300[source]
I'm probably just misunderstanding, but a calendar like Google Calendar definitely needs to store time with a time zone (i.e. dates with a time component). This makes it possible to keep a scheduled event at 9 AM, even if the time zone offset is redefined (due to a political event).
replies(2): >>19534405 #>>19535029 #
37. mastazi ◴[] No.19534316[source]
When you say "timestamp", many programmers think about UNIX timestamps which, by definition, cannot have a timezone, so to those people a "timestamp with TZ" is a bit of an oxymoron. But I understand where you're coming from.
38. StreamBright ◴[] No.19534353{3}[source]
Yep, I was wondering if you could create a postgres type that does exactly this: timestamp_utc and timestamp_local. That would make it easier to understand what is going on.
39. alboy ◴[] No.19534405{3}[source]
Or some of the schedulees of the events move across time zones. There're absolutely other use cases.
40. mypalmike ◴[] No.19534406{3}[source]
> A correctly coded application does explicit normalization of datetime values the moment they enter the boundaries of the application... within the application there is no ambiguity

> This is 100% true... I built a time tracking application which has proved this true many times.

The potential for timezone rule changes means that future events specified in local timezones are fundamentally ambiguous with regard to any absolute timeframe. Thus, normalization of such events to UTC can result in errors.

Consider TicketMaster. They have a database of future events around the globe, sometimes years in advance.

Imagine a band scheduled to play at 24-Jul-2020 at 8pm in Tokyo. Japan's government was considering reintroducing daylight savings time in 2020 so as to reduce heat stress on the 2020 Olympics athletes. In August 2018, they decided against it, but it could have happened. Let's pretend it did happen.

If you had stored this band's gig in a database a few months ago (before the decision to make the change), your UTC-normalized timestamp would be 2020-07-24T11:00Z (UTC+9). But the gig is going to happen at 8pm local time regardless of the rule change. The actual event time is 2020-07-24T10:00Z (UTC+10), an hour earlier than originally scheduled.

Normalizing to UTC makes sense in so many cases that it can seem to be a silver bullet for all cases. But there are definitely business cases that do not work well with this approach.

replies(3): >>19534465 #>>19534843 #>>19535755 #
41. Izkata ◴[] No.19534465{4}[source]
You don't even need rare(ish) occurrences like rule changes to trigger this. Consider recurring events, like "every other Monday at 1pm", and what happens to them at the twice-yearly DST shifts.
replies(1): >>19535057 #
42. l0b0 ◴[] No.19534517{3}[source]
The main reason I want the original time zone is that converting to UTC means information loss. And compensating for that information loss is a PITA, because there is no separate time zone type, so now I have to store a bloody shambles of a UTC timestamp, a string corresponding to a time zone, possibly another string corresponding to the time zone database version (https://codeblog.jonskeet.uk/2019/03/27/storing-utc-is-not-a...), and rely on every single language interfacing with the database treating timestamps and time zones in compatible ways, which is guaranteed not to be the case for the foreseeable future, and far from trivial to do even close to correctly in most languages.

To focus on compatibility with the real world rather than performance, databases could, for example, store all three pieces of information, plus possibly an internal representation purely for ordering and other performance-related purposes, and allow the user to fetch the results using any time zone (defaulting to the server time zone) and time zone database version (again defaulting to the server one). The application layer should not have to do these conversions.
43. crote ◴[] No.19534530{3}[source]
And your last sentence is exactly what many people want.

Let's say I'm writing a calendar application. Naturally, the user wants to enter all events in their own timezone, which in my case is Europe/Amsterdam. So I can just convert that to UTC, and store it along with a timezone string? Well, that's going to break horribly when the EU abolishes summer time next year, because all of those UTC stamps will be wrong!

The only sane option here is to store local time, and a string representing the timezone, which is what the name "timestamp with timezone" suggests it is. Yes, I could do the conversion manually, but it would be a lot more convenient to have it in the database directly.
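
A minimal sketch of that approach (hypothetical names):

    CREATE TABLE calendar_events (
      starts_at_local timestamp NOT NULL,  -- wall time exactly as the user entered it
      tz              text NOT NULL        -- e.g. 'Europe/Amsterdam'
    );
    -- resolve to an absolute instant only at query time, under the tz rules in force then:
    SELECT starts_at_local AT TIME ZONE tz AS starts_at FROM calendar_events;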

replies(1): >>19535132 #
44. tofflos ◴[] No.19534597{3}[source]
I don't know. I don't necessarily think it's a bad wish, but I also feel that the API has its downsides. My impression is that it has caused a fair amount of confusion among Java developers because it forces a more complicated understanding of time upon people who in many cases aren't interested in writing applications that deal with describing events accurately across centuries or with multiple time zones.

Personally I find the API confusing and arbitrary. Why include a Year class? Why include both OffsetDateTime and ZonedDateTime? What is the difference between Instant and LocalDateTime for my application that doesn't travel across time zones? Why isn't there a Range class with a start and a stop?

I guess I find "just use UTC" much easier to deal with.

replies(1): >>19535546 #
45. tofflos ◴[] No.19534655{3}[source]
You've nailed the problem. Too many layers in the application stack are tampering with time. As the author of the application I now have to understand every layer and all their nuances. All the good intentions of the database authors, driver authors and library authors are making it worse. Just leave my time alone and let me deal with it in the application.
46. gigatexal ◴[] No.19534778[source]
What I like about this article is that it does something subtle yet profound: it uses repetition to solidify a concept for the reader. For example, the referencing of wall time in the first few paragraphs lowers the load for the reader to refer back to previous paragraphs to reconnect with a newly introduced concept (wall time in this context was new to me as of reading this article). I think this is a huge win as it doesn’t break the reading timeline and it’s a subtle yet huge showing of attention to detail. Much kudos to the author.
replies(1): >>19534995 #
47. welder ◴[] No.19534843{4}[source]
Yes, I forgot to make an exception for scheduling future local times, so use utc except for future local datetimes.
replies(1): >>19534966 #
48. jasode ◴[] No.19534966{5}[source]
I didn't downvote any of your comments but "In this case [...] store the time zone string" -- seems to contradict your earlier advice of "Always use UTC without time zone".

The issue is that there are 2 types of future times:

(1) scientific-absolute times -- like future solar eclipses; you can normalize these types of times to a future UTC with no ambiguity (other than esoteric adjustments such as TAI leap seconds)

(2) cultural-relative-wall-clock times -- like future recurring appointments, future New Year's midnight celebrations, etc.; These can't be normalized to UTC perfectly because one doesn't know what the exact future UTC time will be due to unknown government changes to DST rules or TZ boundaries. If you simply normalize to UTC, you will throw away information and it will cause bugs. Your eventual addendum of "store the time zone string" basically acknowledges this edge case for future datetimes.

49. fphilipe ◴[] No.19534995[source]
Thanks, much appreciated.

At least for me, writing an article like that takes time and lots of thought (plus forces you to grok the stuff deeply). I wish I could write more often (or at least, carve out the time and energy for it), but I haven’t written much since.

replies(2): >>19535197 #>>19535212 #
50. welder ◴[] No.19535029{3}[source]
Oh, yeah, I forgot scheduling. For all other cases you should use UTC.
51. wiredfool ◴[] No.19535057{5}[source]
For an app that I'm working on, we're storing Dates/Times canonically as a recurrence rule of Dates, plus a Time/several start times in the event's local time zone.

Internally for indexing purposes, we expand the dates + time into a series of datetimes at UTC, at which point we can easily search on them.

52. GordonS ◴[] No.19535060{3}[source]
Totally agree, it's one of the reasons I came across and posted this article - I too had assumed the timestamptz type stored the time zone. I'm using Postgres with Npgsql, which supports using .NET DateTimeOffset types as parameters, and as you say, depending on your DB config everything can actually look as you expect in the DB...
53. samgaw ◴[] No.19535066[source]
It sounds like what you're asking for falls under a 'composite type'. So what you could do is something along the lines of:

  CREATE TYPE datetimetz AS (
    dt  timestamptz,
    tz  varchar
  );
Then in your table def you can use your new type to avoid multiple columns:

  CREATE TABLE example (
    id          integer,
    created_at  datetimetz
  );
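
Inserting and reading it back might then look something like:

  INSERT INTO example VALUES (1, ('2019-04-01 09:00+00', 'Europe/Paris')::datetimetz);
  SELECT (created_at).dt AT TIME ZONE (created_at).tz FROM example;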
54. GordonS ◴[] No.19535099{4}[source]
AFAIK, it stores the offset
55. welder ◴[] No.19535132{4}[source]
Yes, you're right. Scheduling is one reason to store timestamps with timezone, to prevent information loss.
replies(1): >>19535989 #
56. bradleyjg ◴[] No.19535194{3}[source]
In case anyone isn't aware, Stephen Colebourne was the specification lead for JSR-310, which eventually became java.time.*. He had previously written Joda-Time, an open source library which prior to the release of Java 8 was the de facto date/time library for Java for about a decade.

He'd be someone I want on any new RFC effort.

57. gigatexal ◴[] No.19535197{3}[source]
It reads like a senior dev helping a junior where the senior dev understands the crap he or she went through to get to where they are so they break things down and use things like repetition until something sticks to help the junior dev grok it until there’s an aha moment. Keep it up!
58. manigandham ◴[] No.19535210[source]
As many other comments state, the most practical advice is to just use 'timestamp' and ensure that all applications always use UTC. The only exception is user-facing appointments that are scheduled in that user's timezone which can't be converted to UTC safely. In this case, store the timezone in a separate column specifically for those rows/entities that need it.

In neither case is timestamptz useful or necessary, and it will actually end up causing more bugs by converting timezones to/from the database instead of just returning the value and letting your application deal with it.

replies(2): >>19535260 #>>19535778 #
59. yosamino ◴[] No.19535212{3}[source]
And thank you so much for the chart[0] mapping absolute to wall time. I always have a hard time visualizing DST changes (usually I am mentally "sliding" bars of time against each other and get confused), but that one just made it click.

[0] https://phili.pe/posts/timestamps-and-time-zones-in-postgres...

60. mmaurizi ◴[] No.19535260[source]
Even better for future dates than storing the timezone is storing the geographic point the appointment is at and determining the timezone when necessary - as timezone boundaries can change.
61. jeltz ◴[] No.19535277{4}[source]
No, zzzeek is wrong in this case. The general advice from all PostgreSQL contributors and consultants I have heard talking on the subject is to use timestamptz by default and only use timestamp in a few select cases. And PostgreSQL itself uses timestamptz in its system views (the pg_stat_* views for example).

Consider the following queries, both run on a machine where everything is in UTC (I always configure my production machines in UTC):

    SELECT date(('2019-01-01T23:00:00Z'::timestamptz) AT TIME ZONE 'Europe/Berlin');
        date
    ------------
     2019-01-02
    (1 row)

    SELECT date(('2019-01-01T23:00:00Z'::timestamp) AT TIME ZONE 'Europe/Berlin');
        date
    ------------
     2019-01-01
    (1 row)
The first one is likely what you want. When would you write a query like this? For example when you have a non-UTC business day and want to group sales by business day. Doing that kind of aggregation in the application is ugly, error prone and removes many of the benefits of having an SQL database.
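
i.e. something along the lines of (hypothetical sales table with a timestamptz column):

    SELECT date(created_at AT TIME ZONE 'Europe/Berlin') AS business_day,
           sum(amount)
    FROM sales
    GROUP BY 1
    ORDER BY 1;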
replies(2): >>19535552 #>>19535672 #
62. twic ◴[] No.19535306{3}[source]
FWIW, i downvoted you purely because of your edit. If you're right, explain carefully why, don't appeal to your own authority.
63. jeltz ◴[] No.19535382{3}[source]
I downvoted because it is dangerous advice. If you use timestamp in PostgreSQL rather than timestamptz you will run into even more time-related bugs due to misusing PostgreSQL's data types, for example it will make AT TIME ZONE counterintuitive.
64. gsich ◴[] No.19535401[source]
Why do you call it "duration-time"? It seems obvious that for an event that lasted 5.43s for example, (a "duration", better wording) that it's not counted with leap-seconds.
replies(1): >>19535650 #
65. twic ◴[] No.19535410{3}[source]
Am i right in thinking that in Java 8 terms:

SQL timestamp == java.time.LocalDateTime

SQL timestamptz == java.time.Instant, but converted into java.time.OffsetDateTime on read

?

That is, neither of them actually stores a timezone. But timestamp is a date and time that needs to be interpreted in the context of some separately-supplied timezone, whereas timestamptz is an actual instant in time?

IME, it's rare to need LocalDateTime. I work in finance. I often need LocalTime, to specify things like "the exchange opens at 07:30 every day", where the timezone is implicitly that of the exchange, and if the exchange moved (lol Brexit), its timezone might change even if its opening time didn't. I also often need LocalDate, mostly when storing or retrieving data in archives organised by date, because we tend to use the local date for that, so that all the data for one session of an exchange, from open to close, is on one date, wherever in the world it is [1]. But i very rarely need LocalDateTime. Perhaps if i worked in calendaring instead, i would use it all the time.

It's a shame that SQL doesn't have a ZonedDateTime or OffsetDateTime, but as others have pointed out, you can just store a timestamp along with a timezone or offset.

[1] Actually, it's worse than this, in that we use the trade date, which is sometimes not the same as the local date - for example, on the Chicago Mercantile Exchange, the market for the E-mini S&P 500 future opens at 17:00 and closes at 16:00 the next day [2]. But that entire session is considered to have a trade date of the calendar date it closes on. In the olden days, the market would open at 07:00 and close at 16:00 (so you had an hour before the banks closed to settle any cash movements, or possibly because traders had very big lunches in those days). Trade date was calendar date. But then the 21st century came along, and they decided to open all day. But because there was so much crucial stuff that happened at the close of trading, they did it by moving the open back to the previous evening, rather than moving the close forward.

[2] https://www.cmegroup.com/trading-hours.html#equityIndex

replies(1): >>19536361 #
66. jeltz ◴[] No.19535461{5}[source]
But also totally pointless since PostgreSQL's timestamptz type also uses 64 bits.
67. jeltz ◴[] No.19535478[source]
It is necessary if you need a higher resolution than PostgreSQL supports unless you find some extension with a custom type, but using it to save space is rarely worth it because then you make the database harder to query and risk running into the 2038 problem just to save 4 bytes per timestamp.
replies(1): >>19535767 #
68. OliverJones ◴[] No.19535511{3}[source]
Righto. Solicit a time-zone preference from each user, using the zoneinfo naming system ('Asia/Kolkata' or 'America/Indiana/Vincennes'). Then accept times from, and render times to, that user in their local time. DBMS timezone settings do a good job of that. When users decline to give a preference, show them UTC until they get tired of it and give you a preference.

Finally, boycott Oracle's table server. It's horrible in this respect.

69. twic ◴[] No.19535546{4}[source]
> My impression is that it has caused a fair amount of confusion among Java developers because it forces a more complicated understanding of time upon people who in many cases aren't interested in writing applications that deal with describing events accurately across centuries or with multiple time zones.

I think this is spot on, and is the reason why it's so hard to design time libraries, particularly in language standard libraries. How do you allow all users to express what they need, while not requiring any users to express what they do not need?

Bear in mind that Joda/java.time are still oversimplifying, because they take UTC as their ground truth, which means they can't account for leap seconds. A really thorough time library would take TAI as the ground truth, then have a layer which knew about leap seconds to convert to UTC [1], then a layer which knew about timezones to convert to local times. Not knowing about leap seconds means you can't calculate the time between two Instants accurately - if there is a leap second in the interval, the time will be off by one. However, since most people don't mind about gaining or losing a second every now and then, UTC is a decent tradeoff.

> Personally I find the API confusing and arbitrary. Why include a Year class?

So you can't accidentally pass the year and month, which would otherwise both be integers, the wrong way round to a method which takes those, as i did earlier this week.

> Why include both OffsetDateTime and ZonedDateTime?

There's a time format which looks like '1918-11-11T11:00+01:00'. You can parse that to an OffsetDateTime, but you don't have enough information to parse it to a ZonedDateTime. Was that Paris winter time, or Lagos standard time? If you wanted to find the exact six-month anniversary of that moment, you would need to know, because it would be at 11:00 local time [2], which could be 1919-05-11T11:00+02:00 in Paris but 1919-05-11T11:00+01:00 in Lagos, and those are different moments in time.

> What is the difference between Instant and LocalDateTime for my application that doesn't travel across time zones?

If your application, or one of its users, resides in one of the timezones which have summer and winter times, then you still have two offsets to think about. Every Instant identifies a unique moment in time for you (up to leap seconds!), but a LocalDateTime does not, because there is one hour a year of LocalDateTime values which identify two different moments an hour apart, and one hour a year of values which don't identify anything at all.

[1] inb4 someone goes off about relativity

[2] because everyone knows that the armistice was signed at 11 o'clock on the 11th day of the 11th month, so if you rock up and say "bonjour mes amis, le cérémonie de l'anniversaire c'est à 10h" you will get bayonetted

70. zzzeek ◴[] No.19535552{5}[source]
The blog post and this comment are oriented towards direct commandline use of PostgreSQL, and to the extent you want to use the database as your primary front facing programming API, you need all the conveniences of opinionated types and conversion functions. But when the database is in its more traditional role as persistence for a large layer of business and presentation logic running in a front facing process, having a stricter model with less chance of implicit decisions made by the database itself is better. I think that's part of the disconnect here why some people are downmodding like crazy while others are agreeing. People are coming from two different perspectives, perhaps data scientists vs backend engineers.
replies(1): >>19542466 #
71. msla ◴[] No.19535650{3}[source]
> Why do you call it "duration-time"?

Because it has units of time (seconds, often) and it's done using time-stamps, as in begin-time and end-time.

> It seems obvious that for an event that lasted 5.43s for example, (a "duration", better wording) that it's not counted with leap-seconds.

Obvious until you try to get someone to understand why their time-stamps can't be UTC if they're going to be used for duration-time instead of wallclock-time.

Obvious until you try to get a library or language implementer to understand that UTC isn't Universal.

replies(1): >>19535764 #
72. zzzeek ◴[] No.19535665{3}[source]
You would still not be using "TIMESTAMP WITH TIME ZONE", and you would not use a TIMESTAMP of any kind - for a schedule, it's clearest if you have three columns: date, scheduled time, and the desired timezone in which the event is scheduled. That removes any ambiguity; when one needs to do date arithmetic with these times, as in the extremely common case of sorting them, the developer is forced to explicitly coerce the three values into ad-hoc UTC or TZ-aware timestamp objects in the application.
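
In PostgreSQL the coercion could also be spelled out in the query itself, roughly (column names invented):

    CREATE TABLE schedule (
      event_date date NOT NULL,
      event_time time NOT NULL,
      event_tz   text NOT NULL  -- e.g. 'America/New_York'
    );
    -- coerce to an instant under today's tz rules when sorting or filtering:
    SELECT * FROM schedule
    ORDER BY (event_date + event_time) AT TIME ZONE event_tz;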

More realistically, for an application that is storing only near future dates within a limited geographic range, you can probably get away with using UTC for storage, as that's what we did at MLB anyway, but today I would likely not give this advice.

73. zzzeek ◴[] No.19535672{5}[source]
I don't use the "AT TIME ZONE" operator. The date I'd be feeding into my query is a UTC timestamp that I converted early in my Python application (Edited for clarity). Similarly, I get UTC datetimes from the database into my Python application and handle display logic in the application, not in the database. (and of course we are not talking about future timestamps which neither is this blog post).

edit edit: oh, you mean you want PG to actually show you groupings of business days. Well sure, that is a highly specialized kind of query that has to know where holidays are and such depending on what you are trying to do (if you need to count business days, perhaps). If I were writing such a query, I'd have to be very careful regardless so I'd ensure the appropriate casts are used against my source timestamps so that they are properly interpreted by PG's calendar-aware functions as well as how to determine what's a business day beyond simple weekend days.

Using PG's calendaring functions is nice but in reality this is more of a data analyst feature, which is what PG spends a lot of time advertising. Use timestamptz all you want in data analyst queries but in your CREATE TABLE use timestamp.

replies(2): >>19535756 #>>19570902 #
74. zzzeek ◴[] No.19535755{4}[source]
> The potential for timezone rule changes mean that future events specified in local timezones are fundamentally ambiguous with regard to any absolute timeframe. Thus, normalization of such events to UTC can result in errors.

Ok but you still dont use TIMESTAMP WITH TIME ZONE to store those either. For future events where you want to protect against time zone changes, you store the intended time which is what this thread is calling "wall clock time". I think you shouldn't use a timestamp object for such a time because it's not a timestamp at all, it's a human description of something that has a moving linkage to a timestamp. However, this makes being able to search and filter for these events more challenging if the database lacks functions to coerce these values into current UTC timestamps, or if you are trying to optimize for indexes that don't make use of these functions.

> Consider TicketMaster. They have a database of future events around the globe, sometimes years in advance.

Most of my datetime experience is from working at MLB.com with Ticketmaster servers in the early 2000s. We used Oracle and we stored the next season's worth of baseball schedules in straight UTC, in addition to storing the original "wall clock plus timezone" version of that time, after our first year online where we were stupidly storing in Oracle's default timezone which was of course EST, and on October 30th at 2 AM the whole site broke because from the perspective of some of our database queries time had moved one hour backwards.

It was not an option to store only "wall clock time" here because we needed to be able to search and sort these events in the database and Oracle did not have the calendaring / indexing capabilities to make it feasible to be coercing from wall clock to UTC on the database side. We were not too worried about timezones being changed within the US/Canada within the next year, as in the rare occurrence that this happened, we simply ran an UPDATE for the UTC timestamps in question based on their desired timezone. I notice the blog post we are talking about here makes no mention of future dates or the challenges in storing them accurately while still being able to query them efficiently; for future dates we had to store both the UTC and the wall clock data.

> Normalizing to UTC makes sense in so many cases that it can seem to be a silver bullet for all cases.

my disagreement with this blog post was first triggered in that it is an "You should always do X" kind of post. The word "future" doesn't even appear in the text of this post nor do the words "index" or "sort", which is a funny thing for an article that seems to assure the inexperienced reader that this is all you need to know about timestamps in databases.

75. colanderman ◴[] No.19535756{6}[source]
The parent's example wasn't about display logic, it was about aggregation. I trust you don't do that at the application layer.
replies(1): >>19535788 #
76. gsich ◴[] No.19535764{4}[source]
>Because it has units of time (seconds, often) and it's done using time-stamps, as in begin-time and end-time.

If you are recording a duration you have another unit. If you use begin-time and end-time you are calculating a duration, which is different of course.

If you calculate durations, then you might consider leap seconds, if they are important for your measurement. If not, then you just ignore them. I don't see a problem here.

Besides, leap-seconds are known, so removing them from a calculated duration is not a problem.

77. minusf ◴[] No.19535767{3}[source]
A 64-bit Unix timestamp already exists. Also, it's a number of seconds, so how is it higher resolution than the PostgreSQL timestamp?
replies(1): >>19542364 #
78. taffer ◴[] No.19535778[source]
I think you misunderstood how timestamps work in PostgreSQL. Timestamptz is always saved as UTC. Only in exceptional cases where you need to store something in local time, such as a scheduled event that is not in UTC, should you use the timestamp WITHOUT TZ. This is also what well-known PostgreSQL developers recommend: https://tapoueh.org/blog/2018/04/postgresql-data-types-date-...
replies(1): >>19539204 #
79. ◴[] No.19535788{7}[source]
80. conradfr ◴[] No.19535839[source]
I guess you're right.

I have a side project that stores radios' schedules and started with timestamptz (while storing in UTC anyway) and thought I did the right thing.

Then I converted part of the project to Elixir and discovered that Ecto uses timestamp by default. I thought it was odd but looked into the reasoning and discovered the misnaming of the "with timezone" part and that José Valim was right as usual ;)

What I ended up using in my select queries is something along the lines of :

> s.date_time_start at time zone 'UTC' at time zone 'Europe/Paris' as start_at

Unrelated, but at work Friday (while reading this article, as fate would have it) I had to debug an erroneous time displayed to some users by text. I found one line in the service that forgot to specify the timezone while formatting the datetime. It used to work, so what changed? Well, our users are increasingly international and we recently moved those cron jobs from one server in Ireland to multiple AWS zones, so what seemed to work for years before showed up now as a bug. I also know they had a hectic debug day six months ago during the last DST event. In conclusion, dates are hard.

81. zeroimpl ◴[] No.19535989{5}[source]
This is exactly why I said in my original post that I sometimes want this - in many cases I don't, but whenever a user has explicitly input the data I'd like to easily be able to return it in the same timezone. I can (and do) store two columns in the database, but I'd prefer the database and JDBC interface natively support this as a single data type.

Right now, supposing you have two columns, EventTime TIMESTAMPTZ and EventTimeZone TEXT, you'd have to do something like the following in your query to get the data returned to the application properly:

    SELECT (EventTime AT TIME ZONE EventTimeZone) || ' ' || EventTimeZone
82. niftich ◴[] No.19536361{4}[source]
PostgreSQL 'TIMESTAMP', for nonextreme values, is the same idea as java.time.Instant -- an absolute point on the timeline stored internally as a number from some epoch -- but whose default textual interface accepts and emits java.time.LocalDateTime values that correspond, but aren't tagged with the UTC zone.

When you insert into a 'TIMESTAMP', it's as if it did input LocalDateTime(x).atZone(UTC).toInstant(), and saved that; and when you read it, it's as if it does storedInstant.atZone(UTC).toLocalDateTime().

A 'TIMESTAMPTZ' is stored the same way, but on string output is converted to the session timezone, and an underspecified input is enriched with the session timezone.

When you insert into a 'TIMESTAMPTZ' it'd take the input OffsetDateTime or ZonedDateTime, or enrich the input with the session zone until one were obtained, call .toInstant(), and save it. On read, it'd do storedInstant.atZone(sessionZone) to obtain a ZonedDateTime, and its default output format only includes the offset.

LocalDateTime is useful for calendaring, and as an intermediate step when joining a LocalDate and a LocalTime and working towards a Zoned or Instant.
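
In psql terms, that session-zone behaviour looks roughly like this:

    SET TimeZone = 'Asia/Tokyo';
    SELECT timestamptz '2019-04-01 09:00';  -- enriched with the session zone: 2019-04-01 09:00:00+09
    SELECT timestamp   '2019-04-01 09:00';  -- just the wall time: 2019-04-01 09:00:00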

83. manigandham ◴[] No.19539204{3}[source]
It's not about the storage of the value, it's the fact that PG will potentially change the value you provide compared to what's stored when using 'timestamptz'. If you use UTC and send it as UTC and request it as UTC then everything is fine but there are plenty of edge cases when using ORMs, handwritten SQL, different server timezones, or type conversions that can lead to issues on the way in and out of the database.

Having the database store anything other than what you provide is always a source of bugs.

84. petereisentraut ◴[] No.19541872[source]
The problem with this is that time zone definitions change, both in the future because of political and administrative changes, as well as in the past, when mistakes are corrected. So a datum like "1st of May 2022 6pm in London" is not a fixed time. And therefore, you can't do any useful computations with this, like what is the time interval between then and the current time, or is this timestamp before, the same, or after, "1st of May 2022 6pm in Madrid". You can't even compare timestamps in the same time zone, because on some days you don't know whether 2am != 3am. Without the ability to do computations or even basic comparisons, such a type wouldn't be very useful in a database.
85. jeltz ◴[] No.19542364{4}[source]
If you need higher than micro second resolution or if you are working with astronomical timescales. These are not common cases but some people have them.
86. jeltz ◴[] No.19542466{6}[source]
My background is as a backend engineer and I would never use SQL for presentation (outside the occasional custom reporting query), but that does not mean you should not use timestamptz. You should just use a database driver which treats a timestamptz as an absolute point in time and throws away the time zone after parsing. By using timestamptz with a good PostgreSQL driver you ensure that PostgreSQL's own time functions work as expected while also getting sane data types from your queries, data types where time zone handling belongs in the presentation layers.

In the industries I have worked in aggregating events per date (according to some time zone) has been a common operation and that would have been error prone with the timestamp type.

replies(1): >>19542916 #
87. zzzeek ◴[] No.19542916{7}[source]
I think there are mistakes to be made with the use of either type. On the Python side using "timestamp with time zone" means we're going to get back timezone-aware datetime objects, which are perfectly fine but need to be handled correctly on the application side. I would blame the implicit decisions made by PostgreSQL in operators like "AT TIME ZONE", choosing to make the same operator do two different things based on the flavor of timestamp, as well as returning the opposite type from the one given, for making the whole area more difficult than it has to be. On MySQL, the convert_tz() function is clear and unambiguous: you pass it the timestamp and the "from" and "to" timezones. The difference is that PostgreSQL is trying to be an "object-relational database" with a lot of SQL language complexity whereas MySQL is just trying to be a persistence engine.
88. fphilipe ◴[] No.19551427[source]
> > However, when you come across timestamp, you can’t possibly know in what time zone the timestamp is just by looking at it.

> yes I do, it's in UTC, because I assume the application is written to correctly handle datetimes.

> > It is merely a wall time. Maybe it’s UTC,

> no, it's UTC.

No, you don't know that if you're looking at the database without knowledge of the app's conventions.

I don't understand your argument about ambiguity and implicit conversions. Yes, if your session or DB is configured to something other than UTC, it will convert the timestamptz values in your output to that time zone's offset, but it'll still include the offset.

Thus, a value "2019-04-01 00:00:00+00" that is output when using timestamptz with UTC (compared to "2019-04-01 00:00:00" when using timestamp), is output as "2019-03-31 19:00:00-05" when using EST. Both values represent the same moment in time, obviously, and are equivalent. Assuming your app and ORM convert everything to UTC, that shouldn't matter at all.
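
For instance, in psql:

    SET TimeZone = 'UTC';
    SELECT timestamptz '2019-04-01 00:00:00+00';  -- 2019-04-01 00:00:00+00
    SET TimeZone = 'EST';
    SELECT timestamptz '2019-04-01 00:00:00+00';  -- 2019-03-31 19:00:00-05, the same instant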

89. ants_a ◴[] No.19570902{6}[source]
I think the described use of timestamp is driven by shortcomings of Python's datetime library not having proper timezone handling. I don't think an approach where database design is driven by the opinions of one single application is good. The timestamptz datatype is universally understood to store an unambiguous point in time, which is usually what is intended for recording events. If the output formatting done by the database is a problem for the application, set the TimeZone configuration parameter to UTC.