Re: Stats Power. Report Confidence Limits - p values

  • Re: Stats Power. Report Confidence Limits - p values

    > Then doesn't alpha, or type I error become an impossibility (i.e., reject a
    > true null when the p-value suggests the null is not true).

    Alpha tells you the probability that the difference that you are seeing is
    due to "pure luck". It has become standard to accept an alpha of 0.05.
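
    To make the "pure luck" idea concrete, here is a minimal simulation sketch
    (illustrative only; it assumes Python with NumPy and SciPy). Both groups are
    drawn from the same population, so the null hypothesis is true, and yet
    roughly 5% of replications still reach p < 0.05:

        # When the null is true (both groups come from the same population),
        # a difference reaching p < 0.05 is "pure luck"; over many replications
        # this happens about alpha = 5% of the time.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        alpha = 0.05
        n_reps = 10_000
        false_positives = 0

        for _ in range(n_reps):
            a = rng.normal(0.0, 1.0, size=20)  # group 1, true mean 0
            b = rng.normal(0.0, 1.0, size=20)  # group 2, same true mean
            _, p = stats.ttest_ind(a, b)
            if p < alpha:
                false_positives += 1           # rejected a true null
        print(false_positives / n_reps)        # comes out close to 0.05
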
    > but if we replicate the experiment 100 times wouldn't the same situation
    > be present?

    No, it is not the same situation. For example, if you want to determine
    whether a coin is loaded, you would have to do the experiment (throwing both
    the coin in question and the "control") enough times so that you could
    observe a statistically significant difference. By throwing them only once,
    any difference you might establish has a high chance of being due to chance
    alone (hence a high p-value).
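
    As a rough sketch of the coin example (illustrative only; it assumes Python
    with NumPy and SciPy, and a coin loaded to come up heads 70% of the time):
    with one throw of each coin the test can never reach significance, while a
    few hundred throws make the difference clearly detectable.

        # One throw of each coin can never show a significant difference;
        # many throws of a 70%-heads coin versus a fair control coin can.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        def compare_coins(n_throws):
            heads_loaded = rng.binomial(n_throws, 0.7)   # coin in question
            heads_control = rng.binomial(n_throws, 0.5)  # "control" coin
            table = [[heads_loaded, n_throws - heads_loaded],
                     [heads_control, n_throws - heads_control]]
            _, p = stats.fisher_exact(table)             # compare the two coins
            return p

        print(compare_coins(1))    # always p = 1.0: one throw proves nothing
        print(compare_coins(500))  # typically p << 0.05: difference detectable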

    I hope I understood that last one correctly,

    Mauricio


    ----- Original Message -----
    From: "Bryan Kirking"
    To:
    Sent: Wednesday, January 26, 2005 12:43 PM
    Subject: Re: [BIOMCH-L] Stats Power. Report Confidence Limits - p values


    > Recognizing that p-value interpretation is a topic that is hotly debated
    > and further confused by subtle differences in terminology, I'd like to
    > pose the question:
    >
    > If "alpha, or type I error" is defined (as best as I know) as the
    > probability of rejecting the null hypothesis when the null hypothesis is
    > true,
    >
    > And based on the associated p-value or confidence interval, one rejects
    > the null hypothesis, making the null untrue(?)
    >
    > Then doesn't alpha, or type I error become an impossibility (i.e., reject
    > a true null when the p-value suggests the null is not true). I suspect the
    > answer comes down to Dr. Greiner's remark that this is only one
    > experiment, but if we replicate the experiment 100 times wouldn't the same
    > situation be present? Does type II error (probability of not rejecting a
    > false null) now become the best measure of confidence (and I use
    > confidence for lack of a better term)?
    >
    > As to predefining the alpha level, the issue becomes even more difficult
    > to me when I consider that most studies I read or perform usually have
    > multiple comparisons. If one doesn't set overall confidence levels and
    > therefore individual levels a priori, how do we guarantee that the overall
    > confidence is maintained? Do we do analyses that "accepts" p = 0.026 given
    > that another variable is p = 0.024 and therefore maintains 0.05? To me,
    > this is a very good reason for keeping with predefined values, as long as
    > those values are suitable for your application.
    >
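
    One standard way to handle the multiple-comparisons question above is to fix
    the per-comparison level in advance so that the overall (family-wise) level
    stays at 0.05, for example with a Bonferroni split. A minimal sketch
    (illustrative only; the p-values below are hypothetical):

        # Keep the family-wise type I error near 0.05 across several
        # comparisons by fixing the per-comparison level a priori (Bonferroni).
        alpha_overall = 0.05
        p_values = [0.024, 0.026, 0.40]   # hypothetical results of 3 comparisons

        alpha_per_test = alpha_overall / len(p_values)   # ~0.0167, set a priori
        for i, p in enumerate(p_values, start=1):
            verdict = "reject" if p < alpha_per_test else "do not reject"
            print(f"comparison {i}: p = {p:.3f} -> {verdict} the null "
                  f"at the {alpha_per_test:.4f} level")

    Allocating the 0.05 budget only after seeing the results (0.024 + 0.026 =
    0.05) would not give the same guarantee, which is essentially the argument
    above for keeping predefined values.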

    -----------------------------------------------------------------
    To unsubscribe send SIGNOFF BIOMCH-L to LISTSERV@nic.surfnet.nl
    For information and archives: http://isb.ri.ccf.org/biomch-l
    -----------------------------------------------------------------