Suppose you are a project manager. You can estimate the effort in days for a specific task and a specific developer. After performing the estimation you obtain min and max values.
After that you delegate the task to the developer, and in practice you also set a deadline.
Which estimate is better to use when setting the deadline: min or max?
As I see it, the min estimate can cause stress for the developer, while the max estimate can lead to the developer using all the allocated time even if the task could be completed faster (the so-called Student syndrome). What are the other pros and cons of the two approaches?
EDIT:
Small clarification: I am talking about setting deadlines for subordinates when delegating a task, NOT about reporting to my boss.
EDIT:
One more clarification: I can keep my real estimate in mind, give my boss a slightly larger one, and give my subordinates a slightly smaller one. So the question also touches on this: is it a good idea to give the developer an underestimate to make them work harder?
Use neither min nor max but something in between.
Erring on the side of overestimation is better. It has much nicer cost behavior in the long term.
To cope with the stress caused by underestimation, people may take shortcuts that are harmful in the long term: for example, taking on extra technical debt that has to be paid back eventually, and it comes back with interest. Those costs grow exponentially.
The extra cost from the inefficiency caused by Student syndrome grows only linearly.
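To make the asymmetry concrete, here is a toy comparison. All the numbers are assumptions made up for illustration, not measurements from any real project:

```python
# Toy comparison (made-up numbers): slack wasted by Student syndrome
# grows linearly with the number of tasks, while technical debt taken
# on under deadline pressure compounds like interest.
slack_per_task = 0.5    # days lost to padding per overestimated task (assumption)
debt_interest = 1.15    # 15% compounding cost per task built on the debt (assumption)

for n in (5, 10, 20, 40):
    linear = slack_per_task * n            # cost of overestimation
    compounding = debt_interest ** n - 1   # cost of underestimation
    print(f"{n:3d} tasks: slack {linear:5.1f} days, debt {compounding:7.1f} days")
```

With these assumed rates the slack cost reaches 20 days at 40 tasks while the compounding debt cost passes 260; the exact numbers are arbitrary, but the shapes of the two curves are the point.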
Estimates and targets are different things. You (or your managers and customers) set the targets you need to achieve; estimates tell you how likely you are to meet those targets. A deadline is one sort of target. The deadline you choose depends on what confidence level (i.e. what risk of missing the deadline) you are willing to accept. P50 (a 0.5 probability of meeting the deadline) is commonplace; sometimes you may want to schedule with P80 or some other confidence level. Note that the probability curve is long-tailed, and the more confidence you want, the more time you will need to allocate for the project.
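To make the confidence-level idea concrete, here is a minimal sketch. The lognormal shape and the reading of the min/max estimates as P10/P90 quantiles are both assumptions chosen for illustration:

```python
# Minimal sketch: pick a deadline at a chosen confidence level,
# assuming task duration is lognormal and the min/max estimates
# correspond to its P10/P90 (both assumptions, for illustration).
import math
from statistics import NormalDist

def deadline_at(p10, p90, confidence):
    z10, z90 = NormalDist().inv_cdf(0.10), NormalDist().inv_cdf(0.90)
    # Back out the lognormal parameters from the two quantiles.
    sigma = (math.log(p90) - math.log(p10)) / (z90 - z10)
    mu = math.log(p10) - z10 * sigma
    return math.exp(mu + NormalDist().inv_cdf(confidence) * sigma)

# A task estimated at 3..10 days, min/max read as P10/P90:
for p in (0.50, 0.80, 0.95):
    print(f"P{int(p * 100)} deadline: {deadline_at(3, 10, p):4.1f} days")
```

Note the long tail at work: the P50 deadline (about 5.5 days) sits below the midpoint of the 3..10 range, while the P95 deadline (about 11.9 days) is past the max estimate.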
Overall, I wouldn't spend too much time tracking individual tasks. With P50 targets half of them will be late in any case; what matters most is how the aggregate behaves. When composing individual task estimates into an aggregate, neither min nor max is sensible. It's extremely unlikely that all tasks complete in their minimum time (most likely something like P10 time) or all in their maximum time (e.g. P90 time): for n independent P10/P90 tasks the probability is 0.1^n.
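A rough Monte Carlo sketch of that aggregation point, under the same assumed lognormal/P10-P90 reading and with task ranges made up for illustration:

```python
# Rough Monte Carlo sketch (assumptions: independent tasks, lognormal
# durations, min/max estimates read as each task's P10/P90).
import math
import random
from statistics import NormalDist

tasks = [(2, 6), (3, 10), (1, 4), (5, 15)]   # (P10, P90) in days, made up
z10, z90 = NormalDist().inv_cdf(0.10), NormalDist().inv_cdf(0.90)

def sample_duration(p10, p90):
    # Back out the lognormal parameters from the two quantiles.
    sigma = (math.log(p90) - math.log(p10)) / (z90 - z10)
    mu = math.log(p10) - z10 * sigma
    return random.lognormvariate(mu, sigma)

totals = sorted(sum(sample_duration(*t) for t in tasks)
                for _ in range(100_000))
pick = lambda p: totals[int(p * len(totals))]
print("all at min:", sum(p10 for p10, _ in tasks), "days")
print("all at max:", sum(p90 for _, p90 in tasks), "days")
print(f"aggregate P10 {pick(0.10):.1f}, P50 {pick(0.50):.1f}, "
      f"P90 {pick(0.90):.1f} days")
```

The aggregate P10..P90 band comes out much narrower than the all-min..all-max span, which is why summing min estimates (or max estimates) gives a target with close to zero (or near certain) probability of being met.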
PERT has some techniques for coming up with reasonable task-duration probability distributions and aggregating them into larger wholes. I won't go deep into the math here; there is a minimal sketch after the reading list. Here are some pointers for further reading:
Steve McConnell: Software Estimation: Demystifying the Black Art. It's quite readable and pragmatic, though at least the 1st edition I have has some quirks in its math and elsewhere.
Richard D. Stutzke: Estimating Software-Intensive Systems: Projects, Products and Processes. It's a little more academic and a harder read, but it explains the math more thoroughly.
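For flavor, here's a minimal sketch of the classic PERT three-point technique those books cover properly: per-task expected duration E = (O + 4M + P) / 6, standard deviation SD = (P - O) / 6, and (assuming independent tasks) aggregation by summing means and variances. The task numbers are made up:

```python
# Classic PERT three-point estimation, sketched.
import math

tasks = [  # (optimistic, most likely, pessimistic) days, made up
    (2, 3, 8),
    (3, 5, 12),
    (1, 2, 5),
]

total_mean = sum((o + 4 * m + p) / 6 for o, m, p in tasks)
total_sd = math.sqrt(sum(((p - o) / 6) ** 2 for o, m, p in tasks))
print(f"aggregate: {total_mean:.1f} +/- {total_sd:.1f} days")
# A roughly-P84 target for the aggregate is mean + 1 SD.
```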