Re: Stats Power. Report Confidence Limits - p values



    Dear Bryan,

    I think the problem below lies in your assumption that
    the null hypothesis is untrue merely because you have
    rejected it.
    "one rejects the null hypothesis, making the null
    untrue(?)"

    As everyone else so rightly said, there is still a
    chance that it is true and that you have therefore
    made a type I error. That chance is alpha.

    The fashion increasingly seems to be to do away with
    cut-offs and quote the specific confidence level for
    your results. This is probably wise under the
    circumstances, but I think Chris Kirtley is right:
    maybe it's high time we started considering some more
    modern options. I look forward to hearing from someone
    who can tell us how to do the simulations. Perhaps we
    could even consider a workshop at the next appropriate
    conference?
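    On the simulation question: the following is a minimal sketch (assuming
    Python with NumPy and SciPy, neither of which is named in this thread) of
    the kind of Monte Carlo exercise that shows what alpha means. It also
    answers Bryan's "replicate the experiment 100 times" point: when the null
    is true, a test at alpha = 0.05 still rejects in roughly 5% of
    replications, and that is the type I error rate.

```python
# Monte Carlo sketch: under a TRUE null hypothesis, a t-test at
# alpha = 0.05 still rejects in roughly 5% of replicated experiments.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_reps = 10_000          # number of replicated "experiments"
n = 30                   # sample size per group

rejections = 0
for _ in range(n_reps):
    # Both groups are drawn from the SAME distribution, so the null is true.
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(0.0, 1.0, n)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        rejections += 1

print(f"Type I error rate over {n_reps} replications: "
      f"{rejections / n_reps:.3f}")   # should be close to alpha = 0.05
```

    The point of the simulation is that a small p-value in any single
    experiment never rules out a type I error; it only tells you how rare the
    data would be if the null were true.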

    Sian Jenkins Lawson



    --- Bryan Kirking wrote:

    > If "alpha, or type I error" is defined (as best as I
    > know) as the
    > probability of rejecting the null hypothesis when
    > the null hypothesis is true,
    > and, based on the associated p-value or confidence
    > interval, one rejects the
    > null hypothesis, making the null untrue(?)
    >
    > Then doesn't alpha, or type I error, become an
    > impossibility (i.e., rejecting a
    > true null when the p-value suggests the null is not
    > true)? I suspect the
    > answer comes down to Dr. Greiner's remark that this
    > is only one experiment,
    > but if we replicate the experiment 100 times
    > wouldn't the same situation be
    > present? Does type II error (probability of not
    > rejecting a false null)
    > now become the best measure of confidence (and I use
    > confidence for lack of
    > a better term)?
    >
    > As to predefining the alpha level, the issue becomes
    > even more difficult to
    > me when I consider that most studies I read or
    > perform usually have
    > multiple comparisons. If one doesn't set overall
    > confidence levels and
    > therefore individual levels a priori, how do we
    > guarantee that the overall
    > confidence is maintained? Do we do analyses that
    > "accept" p = 0.026 given
    > that another variable is p = 0.024 and therefore
    > maintains 0.05? To me, this
    > is a very good reason for keeping with predefined
    > values, as long as those
    > values are suitable for your application.
    >
    >
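    Bryan's multiple-comparisons question can be made concrete with a short
    sketch (Python assumed, and the p-values below are purely illustrative,
    not from any study in this thread) of the standard Bonferroni and Holm
    adjustments, both of which keep the overall family-wise error rate at or
    below a predefined alpha:

```python
# Sketch of Bonferroni and Holm adjustments for multiple comparisons.
# With m tests, comparing each p-value to alpha/m (Bonferroni) keeps the
# probability of ANY false rejection at or below alpha.
alpha = 0.05
p_values = [0.024, 0.026, 0.31, 0.008]   # hypothetical, for illustration
m = len(p_values)

# Bonferroni: every test uses the stricter threshold alpha / m.
bonferroni = [p < alpha / m for p in p_values]

# Holm step-down: less conservative, still controls the family-wise rate.
# Sort the p-values; the k-th smallest is compared to alpha / (m - k).
order = sorted(range(m), key=lambda i: p_values[i])
holm = [False] * m
for rank, i in enumerate(order):
    if p_values[i] < alpha / (m - rank):
        holm[i] = True
    else:
        break   # once one comparison fails, all larger p-values fail too

print("Bonferroni rejections:", bonferroni)
print("Holm rejections:      ", holm)
```

    Note that neither 0.024 nor 0.026 survives correction here, which is
    exactly why setting the overall alpha a priori matters: two results that
    each look significant at 0.05 need not keep the family-wise error at 0.05.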

    -----------------------------------------------------------------
    To unsubscribe send SIGNOFF BIOMCH-L to LISTSERV@nic.surfnet.nl
    For information and archives: http://isb.ri.ccf.org/biomch-l
    -----------------------------------------------------------------