Stephen Page

11-12-1998, 01:35 AM

Dr. Hooper:

In their book, Psychometric Theory, Nunnally and Bernstein (1994) offer

several good alternative methods for measuring reliability; see if their

suggestions help.

Steve Page, Ph.D.

The Kessler Institute for Rehabilitation

Dr David Michael Hooper wrote:

> Hello,

>

> I am posting this on behalf of some students of mine. They have

> conducted a reliability study in which three raters tested and

> re-tested a group of twelve subjects on two different days.

> Currently, they are attempting to calculate an intraclass correlation

> coefficient (ICC) as described by Shrout and Fleiss (1979) and

> Portney and Watkins (in their book 'Foundations of Clinical Research').

>

> Portney and Watkins state that you can use any variable in the

> analysis.

>

> 'The specific facets included in the denominator will vary,

> depending on whether rater, occasions, or some other facet is the

> variable of interest in the reliability study. For example, if we

> include rater as a facet, then the total observed variance, which, of

> course, does not include direct estimates of true variance (as this is

> unknown). Theoretically, however, we can estimate true score variance

> by looking at the difference between observed variance among subjects

> and error variance. These estimates can be derived from an analysis

> of variance.'

>

> The example given in the text has four raters evaluating six subjects

> on a single day. In calculating the ICC, the between subjects mean

> square, error mean square and between raters mean square are taken

> from a repeated measures ANOVA. We can follow this and reproduce

> it quite easily by doing a repeated measures ANOVA with a single

> effect of RATER. Now, in my students' study, there are main effects

> of both RATER and SESSION. We can't decide which mean square terms

> to use because there are also interactions involved.

>

> We could simplify it by calculating the ICCs separately for test 1

> and test 2 but then lose the effect of session. Perhaps the study

> isn't suited for ICCs.

>

> Anyone have any advice on how to approach this, or references that I

> can point them to?

>

> Thank you,

> David

>

> David M. Hooper, Ph.D.

> Department of Rehabilitation Sciences

> University of East London

> Romford Road

> London E15 4LZ

> Phone 0181-590-7000 (4025)

> d.m.hooper@uel.ac.uk

>

> -------------------------------------------------------------------

> To unsubscribe send UNSUBSCRIBE BIOMCH-L to LISTSERV@nic.surfnet.nl

> For information and archives: http://www.lri.ccf.org/isb/biomch-l

> -------------------------------------------------------------------
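[Editor's note] The mean-square arithmetic described in the quoted message can be sketched as follows. This is a minimal illustration of ICC(2,1) as defined by Shrout and Fleiss (1979), for a subjects-by-raters score matrix; it is not code from either book, and the example data used in the test are invented for the six-subjects-by-four-raters layout the text mentions.

```python
def icc_2_1(data):
    """ICC(2,1) from the two-way ANOVA mean squares (Shrout & Fleiss, 1979).

    data: list of rows, one per subject, each with one score per rater.
    """
    n = len(data)        # number of subjects
    k = len(data[0])     # number of raters
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]

    # Two-way (subjects x raters) sums-of-squares decomposition
    ss_total = sum((data[i][j] - grand) ** 2
                   for i in range(n) for j in range(k))
    ss_subj = k * sum((m - grand) ** 2 for m in row_means)
    ss_rater = n * sum((m - grand) ** 2 for m in col_means)
    ss_err = ss_total - ss_subj - ss_rater

    bms = ss_subj / (n - 1)             # between-subjects mean square
    jms = ss_rater / (k - 1)            # between-raters ("judges") mean square
    ems = ss_err / ((n - 1) * (k - 1))  # residual (error) mean square

    return (bms - ems) / (bms + (k - 1) * ems + k * (jms - ems) / n)
```

For the design under discussion, one pragmatic check along the lines the poster suggests is to run this separately on each session's subjects-by-raters matrix and compare the two coefficients, bearing in mind that doing so discards the session facet.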
