189 points | GordonS | 10 comments
1. robbiep | No.19533377
I have a question that was partially addressed in some comments on the Timezone in Databases article earlier in the week:

What do people think about using Unix time in databases?

replies(4): >>19533555, >>19533627, >>19533651, >>19535478
2. paulryanrogers | No.19533555
Directly using Unix timestamps is unlikely to save much space since most internal representations are likely similar. They're also difficult for mortals to read and change in human-significant ways (like 3 calendar days ago).
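As a concrete illustration in PostgreSQL (a sketch, assuming a hypothetical events table with a bigint created_at column of epoch seconds):

    -- With raw epoch seconds, a "3 days ago" filter needs a manual
    -- conversion step, and the stored values read as opaque integers:
    SELECT * FROM events
    WHERE created_at >= extract(epoch FROM now() - interval '3 days');

    -- If the column were a native timestamptz instead, the interval
    -- arithmetic is direct (and date_trunc('day', ...) would give true
    -- calendar-day boundaries):
    SELECT * FROM events
    WHERE created_at >= now() - interval '3 days';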
3. TheAceOfHearts | No.19533627
It depends on your use case. You should also be aware of the Year 2038 problem [0].

What are you trying to achieve?

[0] https://en.wikipedia.org/wiki/Year_2038_problem
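For a concrete sense of where 32-bit time_t runs out, PostgreSQL makes the rollover point easy to see:

    -- 2^31 - 1 seconds after the epoch:
    SELECT to_timestamp(2147483647);
    -- => 2038-01-19 03:14:07+00

    -- One second later no longer fits in a signed 32-bit integer:
    SELECT 2147483647::int + 1;
    -- ERROR:  integer out of range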

replies(1): >>19533747
4. niftich | No.19533651
I feel that storing unix epoch seconds in a >=4-byte fixed-length numeric or arbitrary-precision numeric column is a reasonable choice if:

1) the record naturally has the semantics of a timestamp that is chiefly used for comparisons between values (e.g. to sort, min, max, subtract to get an approximate interval) and whose conversions to human-meaningful calendrical datetime are not needed or only need to be approximate (given unix timestamps' lack of leap seconds),

AND 2) you've thought about overflow (of both unix time and your column), signedness, sentinel and special values, and NULL handling,

AND 3) you're not enthused by the ANSI SQL definitions of TIMESTAMP types or think you, your RDBMS, your database driver, your programming language, your datetime library, or the code's next maintainer might screw something up.

If only the first condition fails, use an RFC 3339 UTC timestamp in a string column instead. If only the second fails, use one of the SQL TIMESTAMP types instead, so you can pick up a better-defined suite of behaviors.
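A minimal sketch of the epoch-column option under those assumptions (table, column, and constraint names are hypothetical):

    CREATE TABLE events (
        id         bigserial PRIMARY KEY,
        -- bigint rather than integer sidesteps the 2038 overflow
        created_at bigint NOT NULL,
        -- if pre-1970 values are never legitimate, reject them outright
        -- instead of letting -1 creep in as an accidental sentinel
        CONSTRAINT created_at_nonnegative CHECK (created_at >= 0)
    );

The RFC 3339 alternative would be a text column constrained to the expected shape, e.g. CHECK (created_at ~ '^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?Z$').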

5. robbiep | No.19533747
Largely timestamping. High-level accuracy (leap seconds) is irrelevant. I'm aware of the 2038 problem; I assume we'll have some fun on the way there, along with the rest of the Unix world!
replies(1): >>19534097
6. TheAceOfHearts | No.19534097
Storing Unix timestamps in a 64-bit signed integer field would probably be fine in that situation. The only downsides that come to mind are that you'd have to manually convert the number to an actual date when using it, and that you'd lose easy access to date-related functionality in the database.
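Concretely, reusing the hypothetical events table from above, every read and write has to round-trip through a conversion:

    -- Reading: to_timestamp() turns epoch seconds back into timestamptz,
    -- and only then do date functions like date_trunc apply:
    SELECT date_trunc('day', to_timestamp(created_at)) AS day, count(*)
    FROM events
    GROUP BY 1
    ORDER BY 1;

    -- Writing: the reverse conversion:
    INSERT INTO events (created_at)
    VALUES (extract(epoch FROM now())::bigint);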
replies(1): >>19535461
7. jeltz | No.19535461
But also totally pointless since PostgreSQL's timestamptz type also uses 64 bits.
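Easy to check: pg_column_size() reports 8 bytes for both, so the bigint saves nothing:

    SELECT pg_column_size(now()) AS timestamptz_bytes,           -- 8
           pg_column_size(extract(epoch FROM now())::bigint)
               AS bigint_bytes;                                  -- 8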
8. jeltz | No.19535478
It is necessary if you need higher resolution than PostgreSQL supports, unless you find some extension with a custom type. But using it to save space is rarely worth it: you make the database harder to query and risk running into the 2038 problem just to save 4 bytes per timestamp.
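For the higher-resolution case, a sketch (names hypothetical): nanoseconds since the epoch in a plain bigint, which narrows the representable range to roughly the years 1677–2262:

    CREATE TABLE samples (
        recorded_at_ns bigint NOT NULL  -- nanoseconds since 1970-01-01 UTC
    );

    -- Converting back for display drops the sub-microsecond digits,
    -- since timestamptz itself only stores microseconds:
    SELECT to_timestamp(recorded_at_ns / 1e9) FROM samples;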
replies(1): >>19535767
9. minusf | No.19535767
A 64-bit Unix timestamp already exists. Also, it's a number of seconds, so how is it higher resolution than the PostgreSQL timestamp?
replies(1): >>19542364
10. jeltz | No.19542364
If you need finer than microsecond resolution, or if you are working with astronomical timescales. These are not common cases, but some people have them.
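A quick back-of-the-envelope check of what a signed 64-bit counter spans at each resolution:

    SELECT 9223372036854775807 / 1e9 / 86400 / 365.25 AS years_at_ns, -- ≈ 292
           9223372036854775807 / 1e6 / 86400 / 365.25 AS years_at_us, -- ≈ 292,000
           9223372036854775807 / 86400 / 365.25       AS years_at_s;  -- ≈ 292 billion

PostgreSQL's own timestamps are 64-bit microsecond counts, which is why their documented range runs to 294276 AD; seconds in a bigint trade away resolution for astronomical range.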