Recognizing that p-value interpretation is hotly debated, and further
confused by subtle differences in terminology, I'd like to pose the
question:
If "alpha, or type I error" is defined (as best as I know) as the
probability of rejecting the null hypothesis when the null hypothesis is true,
And based on the associated p-value or confidence interval, one rejects the
null hypothesis, making the null untrue(?)
Then doesn't alpha, or type I error become an impossibility (i.e, reject a
true null when the p-value suggests the null is not true). I suspect the
answer comes down to Dr. Greiner's remark that this is only one experiment,
but if we replicate the experiment 100 times wouldn't the same situation be
present? Does type II error (probability of not rejecting a false null)
now become the best measure of confidence (and I use confidence for lack of
a better term)?
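To make the replication point concrete, here is a minimal simulation
sketch in Python (the two-sample t-test, normal data, and sample sizes are
my illustrative assumptions, not anything from a particular study). Both
groups are drawn from the same population, so the null is true by
construction, yet a test run at alpha = 0.05 still rejects in roughly 5 of
100 replications. A small p-value in any single run does not make the null
untrue; alpha is this long-run rejection rate over repeated experiments.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    alpha = 0.05
    rejections = 0
    for _ in range(100):  # "replicate the experiment 100 times"
        # Both samples come from the SAME population: the null is true.
        a = rng.normal(loc=0.0, scale=1.0, size=30)
        b = rng.normal(loc=0.0, scale=1.0, size=30)
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            rejections += 1  # a type I error, by construction
    print(f"rejected a true null in {rejections} of 100 replications")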
As to predefining the alpha level, the issue becomes even more difficult
for me when I consider that most studies I read or perform involve
multiple comparisons. If one doesn't set the overall confidence level, and
therefore the individual levels, a priori, how do we guarantee that the
overall confidence is maintained? Do we run analyses that "accept"
p = 0.026 because another variable came in at p = 0.024, and claim the
pair together maintains 0.05? To me, this is a very good reason for
keeping to predefined values, as long as those values are suitable for
your application.
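On the multiple-comparisons point, the standard a-priori recipe is not to
let two p-values sum to 0.05, but to fix the family-wise level first and
shrink each individual threshold accordingly. A minimal sketch using the
simple Bonferroni correction (the variable names and p-values, including
the 0.024 and 0.026 above, are purely illustrative):

    # Fix the overall (family-wise) level a priori, then derive the
    # per-comparison threshold by dividing by the number of comparisons.
    family_alpha = 0.05
    p_values = {"var_1": 0.024, "var_2": 0.026, "var_3": 0.041}
    per_test_alpha = family_alpha / len(p_values)  # 0.05 / 3 ~= 0.0167
    for name, p in p_values.items():
        verdict = "reject" if p < per_test_alpha else "do not reject"
        print(f"{name}: p = {p:.3f} -> {verdict} at {per_test_alpha:.4f}")

Under this scheme neither p = 0.024 nor p = 0.026 is significant, which is
exactly why the overall level has to be chosen before the individual
comparisons are run, rather than justified by how the p-values happen to
add up.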
-----------------------------------------------------------------
To unsubscribe send SIGNOFF BIOMCH-L to LISTSERV@nic.surfnet.nl
For information and archives: http://isb.ri.ccf.org/biomch-l
-----------------------------------------------------------------