Thanks to Gordon for this article, which is definitely worth reading -
especially the follow-up discussion at:
http://bmj.bmjjournals.com/cgi/eletters/322/7280/226#12283
Chris
Gordon Chalmers wrote:
> An interesting review of the history of the p-value, and its potential
> role today can be found in a BMJ article freely available at:
>
> http://bmj.com/cgi/reprint/322/7280/226
>
> Sifting the evidence-what's wrong with significance tests?
> Jonathan A C Sterne, George Davey Smith
>
> In the introduction it states: "In this paper we consider how the
> practice of significance testing emerged; an arbitrary division of
> results as "significant" or "non-significant" (according to the commonly
> used threshold of P=0.05) was not the intention of the founders of
> statistical inference. P values need to be much smaller than 0.05 before
> they can be considered to provide strong evidence against the null
> hypothesis; this implies that more powerful studies are needed.
> Reporting of medical research should continue to move from the idea that
> results are significant or non-significant to the interpretation of
> findings in the context of the type of study and other available
> evidence."
>
> ********************************************************************
> Gordon Chalmers, Ph.D.
> Dept. of Physical Education, Health and Recreation
> Western Washington University
> 516 High St.
> Bellingham, WA, U.S.A.
> 98225-9067
> http://www.ac.wwu.edu/~chalmers/
> Phone: 360-650-3113
> Email: Gordon-dot-Chalmers-at-wwu-dot-edu
> in above email address: replace "-dot-" with "."
> replace "-at-" with "@"
>
> -----Original Message-----
> From: * Biomechanics and Movement Science listserver
> [mailto:BIOMCH-L@NIC.SURFNET.NL] On Behalf Of Bryan Kirking
> Sent: Tuesday, January 25, 2005 2:46 PM
> To: BIOMCH-L@NIC.SURFNET.NL
> Subject: Re: [BIOMCH-L] Stats Power. Report Confidence Limits - p values
>
> To comment and question some of Dr. Allison's insight:
>
> >>My understanding of the arbitrary "line in the sand" of 0.05 was
> >>originally due to the choice of the original tables (pre computer)
>
> I have heard this too. Before personal computers it was very tedious to
> calculate exact probabilities as is routinely done now, so the
> investigator would pick the appropriate tabulated values to simplify the
> calculations.
>
> >>The p value reflects the probability of the observed change happening
> >>by chance.
>
> Isn't this only correct if the null hypothesis is true (that is, the p
> value is computed assuming the null, not after rejecting it)? This is why
> (as explained to me by statisticians - I won't claim authority here) it
> is considered incorrect to differentiate "significant" from "very
> significant" from "highly significant". I present this point because of
> your comment about relating the alpha level to the seriousness of the
> outcome.
>
> Bryan Kirking
> ProbaSci LLC
> tel. 512.218.3900
> fax. 512.218.3972
> www.probasci.com
> bryan@probasci.com
>
> -----------------------------------------------------------------
> To unsubscribe send SIGNOFF BIOMCH-L to LISTSERV@nic.surfnet.nl
> For information and archives: http://isb.ri.ccf.org/biomch-l
> -----------------------------------------------------------------
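To illustrate Bryan's point: the p value is, by definition, computed under the assumption that the null hypothesis is true - it is the probability of data at least as extreme as those observed, given the null. A minimal permutation-test sketch in Python (the data, group sizes, and number of permutations are hypothetical, chosen only for illustration):

```python
import random
import statistics

random.seed(1)

# Hypothetical example: two groups drawn from the SAME distribution,
# so the null hypothesis (no difference) is true by construction.
group_a = [random.gauss(0, 1) for _ in range(20)]
group_b = [random.gauss(0, 1) for _ in range(20)]

observed = abs(statistics.mean(group_a) - statistics.mean(group_b))

# Permutation test: the p value is the fraction of label shufflings that
# yield a mean difference at least as extreme as the observed one,
# *assuming the null* (group labels are exchangeable).
pooled = group_a + group_b
n_perm = 5000
count = 0
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = abs(statistics.mean(pooled[:20]) - statistics.mean(pooled[20:]))
    if diff >= observed:
        count += 1

p_value = count / n_perm
print(f"p = {p_value:.3f}")  # a probability computed under the null,
                             # not the probability that chance produced the data
```

Because the null is enforced in this simulation, small p values still occur about 5% of the time at the 0.05 threshold - which is exactly why a single p just under 0.05 is weak evidence on its own.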
--
Dr. Chris Kirtley MD PhD
Associate Professor
Dept. of Biomedical Engineering
Catholic University of America
Washington DC 20064
Alternative email: kirtleymd@yahoo.com
-----------------------------------------------------------------
To unsubscribe send SIGNOFF BIOMCH-L to LISTSERV@nic.surfnet.nl
For information and archives: http://isb.ri.ccf.org/biomch-l
-----------------------------------------------------------------