When is Multiple Comparison Correction necessary?


I am not very experienced in hypothesis testing and have trouble understanding when the multiple comparison problem occurs.

As I understand it, the multiple comparison problem occurs when one performs several statistical tests on a single database. Therefore, in order to draw valid conclusions, the significance level should be adjusted. (Am I right?)

In my situation, I have a database and I perform several t-tests on separate parts of it. In other words, the data for each test is completely distinct from the data for every other test, even though all the data belong to one database. So, in principle, the multiple comparison problem should not arise in my tests, right?

Thanks in advance.


There is 1 answer

yulunz

Multiple hypothesis testing performs hypothesis tests on all of your variables simultaneously. You reject the overall null as soon as the test on any single variable rejects, as opposed to only when all of your individual tests reject (a very unlikely event).

When you use, say, a 95% confidence interval on each of your variables separately, the probability that at least one interval misses by pure chance grows with the number of tests, so the overall error rate is no longer 5%. This is not what you want for a multiple test. Therefore, the confidence interval for each variable should be widened a bit (equivalently, each per-test significance level tightened) to account for the fact that only some of your hypotheses need to fail for you to reject overall.
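A small simulation makes the inflation concrete. This sketch (my illustration, not from the answer; it assumes 20 t-tests per experiment with every null true) estimates how often at least one of the 20 tests rejects at the 5% level by chance alone:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_experiments = 10_000   # simulated "databases"
n_tests = 20             # t-tests per database
n_samples = 30           # observations per test
alpha = 0.05

# Every null hypothesis is true: all samples come from N(0, 1).
data = rng.standard_normal((n_experiments, n_tests, n_samples))

# One-sample t-test of "mean == 0" for each of the 20 variables.
_, p = stats.ttest_1samp(data, 0.0, axis=2)

# Fraction of experiments where at least one test rejects by chance.
fwer = (p < alpha).any(axis=1).mean()
print(fwer)  # close to 1 - 0.95**20 ≈ 0.64, far above 0.05
```

So even though each individual test has only a 5% false-positive rate, the chance of at least one spurious rejection across the 20 tests is roughly 64%, which is why the per-test level must be tightened when you want one overall conclusion.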

Therefore it really depends on your goal:

  • If you are testing against a scenario involving multiple variables (you want to draw one overall conclusion), then yes, this is multiple testing and the confidence interval for each variable should be adjusted.
  • If you want a test for each variable (you want to draw one conclusion for each variable), then each of your tests is a single variable test and you do not need to adjust.
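The two goals above lead to different thresholds. Here is a minimal sketch, with made-up p-values, of the simplest adjustment (Bonferroni: test each variable at alpha divided by the number of tests), contrasted with the unadjusted per-variable decisions:

```python
import numpy as np

# Hypothetical p-values from five separate t-tests (made up for illustration).
p_values = np.array([0.012, 0.030, 0.001, 0.200, 0.047])
alpha = 0.05
n_tests = len(p_values)

# Goal 1: one conclusion per variable -> compare each p-value to alpha as-is.
per_variable = p_values < alpha
print(per_variable)   # [ True  True  True False  True]

# Goal 2: one overall conclusion -> Bonferroni: test each at alpha / n_tests,
# which keeps the family-wise error rate at most alpha.
overall = p_values < alpha / n_tests
print(overall)        # [False False  True False False]
```

Note how three of the five "significant" results survive only the per-variable reading; under the family-wise criterion just one does. Bonferroni is conservative, and less strict corrections (e.g. Holm or Benjamini-Hochberg) exist, but the direction of the adjustment is the same.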