Mauricio Barrero

01-26-2005, 04:30 AM

Sorry, I meant to write: "This doesn't mean that if you get a p value

greater than 0.05..."

This doesn't mean that if you get a p value greater than 0.05 the

difference between groups doesn't exist; it's just that you don't have

high enough confidence to report it.

If a difference exists between the groups, then with a large enough

sample size you will be able to bring the p value below 0.05.
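This sample-size claim can be checked with a quick simulation. The sketch below is illustrative Python (not part of the original thread); it assumes normally distributed groups and uses 1.96 as an approximate two-sided 5% critical value. With a real 0.5-SD difference held fixed, the rejection rate rises as the per-group sample size grows:

```python
import random
import statistics

random.seed(0)

def t_stat(a, b):
    """Welch t-statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

def rejection_rate(n, delta, trials=2000, crit=1.96):
    """Fraction of simulated experiments with |t| above the cutoff."""
    hits = 0
    for _ in range(trials):
        a = [random.gauss(0.0, 1.0) for _ in range(n)]
        b = [random.gauss(delta, 1.0) for _ in range(n)]
        if abs(t_stat(a, b)) > crit:
            hits += 1
    return hits / trials

# Same true difference (0.5 SD); only the per-group sample size changes.
small = rejection_rate(n=10, delta=0.5)
large = rejection_rate(n=100, delta=0.5)
print(small, large)  # power is low at n=10 and high at n=100
```

With the effect held fixed, increasing n alone drives the p value below 0.05 more and more often, which is the point above.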

----- Original Message -----

From: "Bryan Kirking"

To:

Sent: Wednesday, January 26, 2005 12:43 PM

Subject: Re: [BIOMCH-L] Stats Power. Report Confidence Limits - p values

> Recognizing that p-value interpretation is a topic that is hotly debated

> and further confused by subtle differences in terminology, I'd like to

> pose the question:

>

> If "alpha, or type I error" is defined (as best as I know) as the

> probability of rejecting the null hypothesis when the null hypothesis is

> true,

>

> And based on the associated p-value or confidence interval, one rejects

> the null hypothesis, making the null untrue(?)

>

> Then doesn't alpha, or type I error, become an impossibility (i.e.,

> rejecting a true null when the p-value suggests the null is not true)?

> I suspect the answer comes down to Dr. Greiner's remark that this is

> only one experiment, but if we replicate the experiment 100 times,

> wouldn't the same situation be present? Does type II error (the

> probability of not rejecting a false null) now become the best measure

> of confidence (and I use confidence for lack of a better term)?
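On the replication point raised above: alpha describes the long-run behaviour of the testing procedure, not the truth of any single experiment. A simulation sketch (illustrative Python added here, assuming normal data and a 1.96 two-sided cutoff) makes this concrete: when the null really is true, repeated experiments still reject about 5% of the time, and every one of those rejections is a type I error:

```python
import random
import statistics

random.seed(1)

def t_stat(a, b):
    """Welch t-statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

# Replicate the same experiment many times with NO true difference:
# both groups come from the identical normal distribution.
trials, n, crit = 5000, 30, 1.96
false_rejections = 0
for _ in range(trials):
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(0.0, 1.0) for _ in range(n)]
    if abs(t_stat(a, b)) > crit:
        false_rejections += 1

rate = false_rejections / trials
print(rate)  # hovers near 0.05: the long-run type I error rate
```

So type I error is not an impossibility: any single rejection may be one of these ~5% of replications, and the p value from that experiment cannot tell you which case you are in.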

>

> As to predefining the alpha level, the issue becomes even more

> difficult to me when I consider that most studies I read or perform

> usually have multiple comparisons. If one doesn't set the overall

> confidence level, and therefore the individual levels, a priori, how do

> we guarantee that the overall confidence is maintained? Do we do

> analyses that "accept" p = 0.026 given that another variable is

> p = 0.024, and therefore maintain 0.05? To me, this is a very good

> reason for keeping with predefined values, as long as those values are

> suitable for your application.
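The splitting of 0.05 across comparisons can be made concrete. For k independent tests each run at level alpha, the chance of at least one false positive is 1 - (1 - alpha)^k, which grows quickly with k; a Bonferroni-style split (alpha/k per test) keeps the familywise level at or below alpha. A short illustrative calculation (plain Python; the k = 10 is an assumed example, not a figure from the thread):

```python
# Familywise error for k independent tests, each at level alpha.
alpha, k = 0.05, 10

# Uncorrected: each test gets the full 0.05.
familywise = 1 - (1 - alpha) ** k        # chance of >= 1 false positive

# Bonferroni-style split: each test gets alpha / k.
bonferroni = 1 - (1 - alpha / k) ** k    # stays at or below alpha

print(round(familywise, 3), round(bonferroni, 3))  # prints 0.401 0.049
```

This is why unequal a priori splits such as 0.026 + 0.024 also work: the sum of the per-test levels bounds the overall type I error at 0.05.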

>

> -----------------------------------------------------------------

> To unsubscribe send SIGNOFF BIOMCH-L to LISTSERV@nic.surfnet.nl

> For information and archives: http://isb.ri.ccf.org/biomch-l

> -----------------------------------------------------------------

