To all who have contributed so far - I believe that this is a
valuable discussion of an area too often neglected in motion
analysis, and the quality of the responses has been excellent.
I am not going to add to the discussion about the relative merits of
different marker systems, but I would like to comment on how these
systems are being evaluated/compared. In particular, I would like to
emphasize one point (previously made by David Smith): Repeatability
is necessary but not sufficient to establish accuracy. It implies
good precision, but cannot address the issue of measurement bias.
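As a back-of-the-envelope illustration (entirely hypothetical numbers, not any
particular system or study), the short Python sketch below shows how a
measurement can look very repeatable from trial to trial while carrying a
systematic offset that only a gold-standard comparison would reveal:

# Minimal sketch (hypothetical numbers): repeatability does not imply accuracy.
# A measurement can have low trial-to-trial variability (good precision) while
# carrying a systematic offset (bias) that a repeatability study cannot detect.
import numpy as np

rng = np.random.default_rng(0)

true_angle = 10.0        # "true" peak knee rotation (deg), known only to a gold standard
soft_tissue_bias = 6.0   # hypothetical systematic marker artifact
trial_noise_sd = 0.8     # small random trial-to-trial variation

measured = true_angle + soft_tissue_bias + rng.normal(0.0, trial_noise_sd, 20)

print(f"between-trial SD (what repeatability sees): {measured.std(ddof=1):.2f} deg")
print(f"mean error vs. truth (what it misses):      {measured.mean() - true_angle:.2f} deg")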
If one considers the nature of errors introduced by skin markers, the
potential for biased measurements is significant. Displacements of
surface markers due to muscle contraction, skin motion due to change
of joint angle, or tissue "bounce" due to impact forces (e.g.
heelstrike) are not random - they are highly correlated with movement
patterns. The errors introduced by these displacements can be quite
repeatable within each subject, but can introduce significant bias
that varies from subject to subject (depending on body type, injury/
disease state, etc.). We can generally agree that reduced within-
subject variability (across trials/test sessions/investigators) is a
"good thing". However, in the absence of a gold standard for
comparison, we simply do not know how accurate our measurements
really are. This has been an Achilles heel of 3D motion analysis
throughout its history. I remember working with Jim Gage at the
Newington Children's Hospital gait lab back in the early to mid-1980s
(using a marker system that might be considered a predecessor to the
Helen Hayes system). We could generate repeatable knee internal/
external rotation and ab/adduction plots for our patients (mostly children
with cerebral palsy), but ended up removing them from our clinical
printouts because we did not feel confident about their accuracy. I
routinely see these plots now; clearly the motion analysis technology
has improved, but how much more do we know about their accuracy than
we did 20+ years ago?
So, I think the focus on the relative repeatability of different
marker systems misses the point to some extent. It would be far more
valuable to understand the effect of marker sets on the "confidence
interval" of the measurements we use for research and clinical
decision-making, particularly as they apply to the specific motions
and subject populations each of us works with. It is simplistic to
assume that there is one marker set that is the "best", even if we
consider only clinical gait analysis. For example, a functional
calibration approach (such as that described by Richard Baker) could
be the best solution for a relatively healthy population, but might
be poorly suited for subjects with limited range of motion in one or
more joints (as suggested by Ton van den Bogert, e.g. patients with
CP, stroke, arthritis, etc.). Landmark-based joint center location
may work better for these populations, but probably has larger errors
with obesity or skeletal deformity. The type of movement task also
changes the nature of errors (consider skin displacement during
running vs. slow walking), and might affect the relative performance
of different systems.
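To make the functional-calibration point a bit more concrete, here is a rough
Python sketch of a generic least-squares sphere fit for a joint center
(hypothetical geometry and noise levels, not any particular published
protocol), simply to illustrate why the estimate degrades when the available
range of motion is small:

# Generic least-squares sphere fit: estimate a joint center from the path of a
# marker that (ideally) moves on a sphere about that center. Hypothetical
# geometry and noise; for illustration only.
import numpy as np

def fit_sphere_center(points):
    """Least-squares sphere center for an (N, 3) array of marker positions."""
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = np.sum(points**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]   # estimated center; sol[3] absorbs the radius term

rng = np.random.default_rng(1)
true_center = np.zeros(3)
radius = 0.25        # marker ~25 cm from the joint center

def simulate(arc_deg):
    # marker positions spread over a spherical cap of 'arc_deg', ~2 mm noise
    polar = np.deg2rad(rng.uniform(0.0, arc_deg, 200))
    azim = np.deg2rad(rng.uniform(0.0, 360.0, 200))
    pts = radius * np.column_stack([np.sin(polar) * np.cos(azim),
                                    np.sin(polar) * np.sin(azim),
                                    np.cos(polar)])
    return true_center + pts + rng.normal(0.0, 0.002, pts.shape)

for arc in (90, 15):   # generous range of motion vs. a restricted joint
    err = np.linalg.norm(fit_sphere_center(simulate(arc)) - true_center)
    print(f"{arc:3d} deg of motion -> joint-center error {err * 1000:.1f} mm")

With a generous arc the fit is well conditioned; restrict the motion to a
small cap and the same measurement noise produces a much larger joint-center
error, which is essentially the concern raised above for patients with
limited range of motion.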
To summarize where I am going with this (I had almost forgotten by
now), I think the search for the "best" marker set is a bit of a
quest for the holy grail. The obvious weaknesses of skin-based
motion measurement for assessing joint kinematics have been
demonstrated in multiple studies, and no marker placement or analysis
method can completely overcome them. Perhaps the best we could hope
for would be to agree that there are multiple approaches, each with
its own merits. They should be selected based on consideration of
the specific application, rather than strictly by convenience ("came
with the system" or "that is the one I know"). Equally important, we
should consider the inherent limitations of this technology, to avoid
interpreting artifact as science (even if it is repeatable). Many
types of studies may be relatively insensitive to marker set
selection. For others, there may be specific methods that are
clearly advantageous. However, for studies of certain disorders and
subject populations, there simply may be no acceptable surface marker
solution.
Then, how do we answer the original question about marker set
selection? I do not think we have the data to make these decisions.
But, I believe that one key step is incorporating true studies of
accuracy (instead of just repeatability) into the decision-making
process. This is an area where emerging technologies (e.g. dynamic
MRI or biplane radiography) have the potential to make significant
contributions to our understanding of the estimation of skeletal
motion from skin markers.
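As a sketch of what such an accuracy study would actually compute (toy arrays
only; "reference" stands in for a gold standard such as biplane radiography),
the difference from a repeatability analysis is simply that the errors are
taken against the reference rather than against the subject's own mean:

# Repeatability vs. accuracy metrics for time-normalized joint angles.
# 'marker' and 'reference' are (trials, samples) arrays from the two systems
# for the same trials; the data below are toy values for illustration.
import numpy as np

def repeatability(marker):
    """Mean between-trial SD -- computable without any reference."""
    return marker.std(axis=0, ddof=1).mean()

def accuracy(marker, reference):
    """Bias and RMS error against a gold-standard reference."""
    err = marker - reference
    return err.mean(), np.sqrt((err**2).mean())

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 101)
reference = 20 * np.sin(2 * np.pi * t) + rng.normal(0, 0.5, (5, 101))
marker = reference + 5.0 + rng.normal(0, 0.5, (5, 101))   # 5 deg systematic artifact

bias, rmse = accuracy(marker, reference)
print(f"between-trial SD : {repeatability(marker):.2f} deg")
print(f"bias vs. reference: {bias:.2f} deg, RMS error: {rmse:.2f} deg")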
Thanks to Nancy Denniston for initiating this topic, and I look
forward to continued discussion/debate.
Scott Tashman
___________________________
Scott Tashman, Ph.D.
Associate Professor
Director, Biodynamics Laboratory
Dept. of Orthopaedic Surgery
University of Pittsburgh
Orthopaedic Research Laboratories
Rivertech, 3820 South Water St.
Pittsburgh, PA 15203
Phone: office 412-586-3950
fax 412-586-3979