I am in complete agreement with Dr. Allison's summary of statistics.

In response to the question below I would add that the p value is the
probability, assuming the null hypothesis and all the other statistical
assumptions are valid, of obtaining an outcome at least as extreme as the
one observed by chance alone. This probability is compared to the alpha
level to determine statistical significance. The reason that there is no
such thing as "highly significant" or the even more annoying "trending
towards significance" is that the experimental p value is the outcome of
only one experiment. If the p value of that experiment is sufficiently
small (less than alpha) we have confidence in concluding that the
observed phenomenon was _probably_ not due to chance (not a Type I
error). Our level of confidence is reflected by the alpha level. In
effect, alpha is a predetermined long-run probability of making a Type I
error. If alpha is .05 then, whenever the null hypothesis is actually
true, we will wrongly reject it 5 times out of 100; we just don't know
if this is one of those times. Since alpha is our pre-established
cutoff, one p is as good as another. Remember, you cannot know if the
null hypothesis or research hypothesis is correct. You can "reject the
null hypothesis" (p < alpha) or "fail to reject the null hypothesis"
(p > alpha), which is not the same as concluding that the null
hypothesis is correct, but rather that the null hypothesis (random
chance) is as good at explaining the data as the research hypothesis and
that you therefore have no reason to prefer the research hypothesis.
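The long-run reading of alpha above can be checked with a quick
simulation (my own sketch, not part of the original post; the two-sided
z-test with known sigma and the N(0,1) samples are illustrative
assumptions, not anything from the exchange):

```python
import math
import random

def type_i_error_rate(n=30, trials=4000, z_crit=1.96, seed=1):
    """Simulate experiments in which the null hypothesis is TRUE
    (every sample is drawn from N(0, 1), so the true mean is 0) and
    count how often a two-sided z-test at alpha = .05 still rejects."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(trials):
        sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
        z = (sum(sample) / n) * math.sqrt(n)  # sigma is known to be 1
        if abs(z) > z_crit:  # |z| > 1.96 corresponds to p < .05
            rejections += 1
    return rejections / trials

# The rejection rate converges on alpha (roughly 0.05), even though the
# null hypothesis is true in every simulated experiment.
print(type_i_error_rate())
```

No individual run tells you whether it was one of the unlucky 5 in 100;
only the long-run error rate is controlled, which is the sense in which
"one p is as good as another" once it falls below the cutoff.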

What we really want to do, in our own work and in our evaluations of the
work of others, is to justify alpha and not blindly insist on the .05
level. This level is probably not the most appropriate alpha for every
experiment and every data set.

Thomas M. Greiner, Ph.D.

Assistant Professor of Anatomy

Department of Health Professions

University of Wisconsin - La Crosse

4054 Health Science Center

1725 State Street

La Crosse, WI 54601-3742

Phone: (608) 785-8476

Fax: (608) 785-8460

Email: greiner.thom@uwlax.edu

-----Original Message-----

From: Biomechanics and Movement Science listserver

[mailto:BIOMCH-L@NIC.SURFNET.NL] On Behalf Of Bryan Kirking

Sent: Tuesday, January 25, 2005 4:46 PM

To: BIOMCH-L@NIC.SURFNET.NL

Subject: Re: [BIOMCH-L] Stats Power. Report Confidence Limits - p values

To comment on and question some of Dr. Allison's insight:

>>My understanding of the arbitrary "line in the sand" of 0.05 was

>>originally due to the choice of the original tables (pre computer)

I have heard this too. It was very tedious to calculate exact
probabilities (pre personal computer) the way it is done now, so the
investigator would pick the appropriate values to simplify the
calculations.

>>The p value reflects the probability of the observed change happening
>>by chance.

Isn't this only correct if the null hypothesis is correct (not
rejected)? Is this why (as explained to me by statisticians - I won't
claim authority here) it is considered incorrect to differentiate
"significant" from "very significant" from "highly significant"? I
present this point because of your comment about relating the alpha
level to the seriousness of the outcome.
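A permutation test makes the "by chance alone, with the null taken as
true" reading concrete: the p value is computed directly as the fraction
of random relabelings of the data that produce a difference at least as
extreme as the one observed. A minimal sketch (my own illustration, not
from the original exchange; the data values are made up):

```python
import random

def permutation_p_value(a, b, trials=5000, seed=1):
    """Two-sided permutation test on the difference of group means.
    The p value is the fraction of random relabelings of the pooled
    data (chance alone, with the null hypothesis assumed true) whose
    group difference is at least as large as the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / trials

# Well-separated groups give a small p; identical groups give p = 1.0.
print(permutation_p_value([2.1, 2.5, 2.3, 2.7], [1.1, 1.4, 1.2, 1.5]))
```

Note that the whole calculation is carried out under the assumption that
group labels are exchangeable, i.e., that the null is true — which is
exactly the conditioning Kirking's question points at.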

Bryan Kirking

ProbaSci LLC

tel. 512.218.3900

fax. 512.218.3972

www.probasci.com

bryan@probasci.com

-----------------------------------------------------------------

To unsubscribe send SIGNOFF BIOMCH-L to LISTSERV@nic.surfnet.nl

For information and archives: http://isb.ri.ccf.org/biomch-l

-----------------------------------------------------------------

