Sometimes multiple tests of the same hypothesis are conducted, using the same model each time but with some model parameter changed. The graph above shows 21 such tests. Applying a Bonferroni correction to these tests would probably render all findings insignificant (i.e. instead of a 1% significance level, require a 1%/21 ≈ 0.048% significance level to attain significance). But, as adjacent test results in parameter space are strongly correlated, intuitively we haven't conducted 21 independent "tests" here - maybe closer to 5? Is there a way to quantify the effective number of independent tests in this setting? As results for similar values of the parameter are similar, this can be viewed as a calibration of the model rather than data-dredging folly.
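One common way to quantify an "effective number of tests" is the eigenvalue-based estimate of Nyholt (2004), which shrinks the count toward 1 as the correlation matrix of the test statistics becomes more degenerate. Below is a minimal sketch: the AR(1)-style correlation structure (rho = 0.9) is an assumption standing in for "adjacent parameter values give strongly correlated results", not something taken from the data above.

```python
import numpy as np

def effective_tests(corr):
    """Nyholt's M_eff = 1 + (M - 1) * (1 - Var(lambda) / M),
    where lambda are the eigenvalues of the correlation matrix."""
    m = corr.shape[0]
    eigvals = np.linalg.eigvalsh(corr)
    return 1 + (m - 1) * (1 - np.var(eigvals, ddof=1) / m)

# Assumed correlation structure: tests at nearby parameter values
# are strongly correlated (AR(1) with rho = 0.9, chosen for illustration).
m, rho = 21, 0.9
idx = np.arange(m)
corr = rho ** np.abs(idx[:, None] - idx[None, :])

m_eff = effective_tests(corr)
alpha = 0.01
print(f"effective number of tests: {m_eff:.1f}")   # somewhere between 1 and 21
print(f"Bonferroni threshold with 21 tests: {alpha / m:.5f}")
print(f"Bonferroni threshold with M_eff:    {alpha / m_eff:.5f}")
```

The resulting threshold sits between the uncorrected 1% level and the full Bonferroni 0.048% level. Nyholt's estimate is known to be conservative; alternatives such as permutation-based methods or the Li-Ji refinement typically give a smaller effective count when correlations are strong.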