So: 2 hours -> 4 days, 1 week -> 2 months, etc.
(I'm not sure where this turned up, but it was a long time ago, going on three decades.)
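For concreteness, a minimal sketch of the heuristic in Python (the unit ladder and function name are my own illustration, not from any source mentioned here):

    # "Double the number, bump the unit" estimation heuristic.
    # The unit ladder below is an assumption for illustration.
    UNITS = ["hours", "days", "weeks", "months", "years", "decades"]

    def pad_estimate(quantity, unit):
        """Double the quantity and move to the next-larger unit."""
        i = UNITS.index(unit)
        return 2 * quantity, UNITS[min(i + 1, len(UNITS) - 1)]

    print(pad_estimate(2, "hours"))  # (4, 'days')
    print(pad_estimate(1, "weeks"))  # (2, 'months')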
The other option is to carefully track tasks, relevant dimensions, estimates, and actual performance, and see whether any predictive model can be derived from that. The problem is that this is a classic instance of modelling in which the model itself affects the domain (as with economics and sociology, in contrast to astronomy and meteorology, where this isn't the case), such that estimates based on past performance data end up reflecting the fact that the prediction model is in use.
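One crude version of that tracking approach, as a sketch: keep a log of (estimate, actual) pairs and derive a single correction factor, e.g. the geometric mean of the overrun ratios (the numbers and the choice of geometric mean are my own hypothetical illustration, not anything from the comment above). The feedback problem is exactly that once people know this factor gets applied to their numbers, their raw estimates drift.

    from math import exp, log

    # Hypothetical task log: (estimated_hours, actual_hours) pairs.
    history = [(2, 16), (8, 40), (4, 12), (16, 80)]

    # Geometric mean of actual/estimate ratios as a correction factor
    # (geometric rather than arithmetic, since overruns are multiplicative).
    factor = exp(sum(log(actual / est) for est, actual in history) / len(history))

    def calibrated(estimate_hours):
        return estimate_hours * factor

    print(round(factor, 1))         # ~4.9x overall overrun in this made-up log
    print(round(calibrated(8), 1))  # corrected figure for a new 8-hour estimate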
I've certainly seen some environments — plural — where a task that should take 1 hour actually takes 2 days, and one that should take 2 days takes 4 weeks.
1. If we’re talking about bamboozling people who are either ignorant or willfully obtuse, excess precision is a stupid but useful tool for getting a realistic multiple of 3x-4x past them.
2. If the target audience understands excess precision, the excess precision is a nice little in-joke to flatter them and acknowledge their cleverness.
3. The absurdity of the precision illustrates the very imprecision of the estimate.
Note that this also comes out about the same if you're using months (12 months -> 24 years, i.e. 2.4 decades) and, at least within a factor of two-ish, weeks (52 weeks -> 104 months, i.e. 0.867 decades).
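Spelling out that arithmetic, assuming the implicit baseline being compared against is the heuristic applied to 1 year directly (1 year -> 2 decades); that reading is my assumption:

    # One year expressed in three units, run through "double and bump",
    # with each result converted to decades for comparison.
    results = {
        "1 year    -> 2 decades":  2.0,
        "12 months -> 24 years":   24 / 10,
        "52 weeks  -> 104 months": 104 / 12 / 10,
    }
    for label, decades in results.items():
        print(f"{label}: {decades:.2f} decades")
    # 2.00, 2.40, and 0.87 decades: the same ballpark, within a factor of
    # two-to-three depending on the starting unit.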
I'm not claiming this is accurate; I'm saying it's a heuristic I'm familiar with.
This may have been in Arthur Bloch's Murphy's Law, and Other Reasons Why Things Go Wrong, though I'm not finding it in comparable collections. Possibly it came from project-planning literature I was reading in the 1990s (DeMarco & Lister, Brooks, Boehm, McConnell, etc.).
I don't know. Going from 8 hours => 16 days seems like quite the markup.