3DMA reliability

  • 3DMA reliability

    This post relates to Chris Kirtley’s comment “…it seems to me the field has not progressed much in that last 10 years” as well as a comment on poor reliability and clinical 3D gait analysis (Thread: “Best web-site for interaction with doctors engaged with human gait disorder”). It also follows on from my previous posts, which presented an understanding of axes misalignment, 3DMA methods and normative gait data. The latter highlighted to me the lack of awareness of the reliability of different 3DMA methods. To put some perspective on the comparative reliability of different methods, and on the progress made over the last 20+ years in 3DMA, I have presented the results of a systematic review from a 4th year student research project.

    Methods include traditional anatomical landmark based methods, the KAD, optimization methods, and rigid fixation devices with and without functional joint centre methods. A summary Excel spreadsheet is attached and can be found at:
    Google drive folder: 3DMA Reliability
    https://drive.google.com/open?id=0Bw...nBIWENsakxfcFk
    or excel file: ResultsTableSytematic Review brief.xlsx
    https://drive.google.com/open?id=0Bw...1NxeDNWQndsNnc
    Google drive preview does not show any plots.

    Key points:
    1. Intra-session (same marker placement) reliability for the majority of studies was very poor and unacceptable for non-sagittal rotations of the hip, knee and ankle. Acceptable intra-session reliability has been produced in a number of studies for a few joint angles, irrespective of the type of method, but this is the exception.
    2. Large variability is seen within each type of study design across intra-session, inter-session and inter-examiner results. Therefore no type of design can be considered inherently reliable; reliability is highly dependent on the implementation of the method.

    There is something fundamentally wrong with the processes undertaken to derive joint angle data for the majority of reliability studies. So much so that often the gait variability within session is far larger than the total variability expected between subjects during normal gait.
    Comparison of methods:
    1. Traditional anatomical landmark (AL) based marker methods of the 80’s and 90’s (Kit Vaughan, mod-HH, VCM, PiG) were known to suffer from cross-talk (axes misalignment and non-linear errors) and poor reliability in the non-sagittal knee joint angle data.
    2. The KAD was introduced (90’s?) to help define the thigh medio-lateral (knee flex/ext) axis alignment, to improve reliability and reduce cross-talk. However it had limited success, with only marginal improvements in the reliability of gait kinematics over the traditional marker based methods. The influence of the examiner’s interpretation and implementation of the method, and variability in KAD placement, was still a limiting factor.
    3. Optimisation approaches of the 2000’s saw a variety of methods to align the medio-lateral knee axis by minimizing knee abd/add cross-talk, including PCA or a best fit to an ideal knee abd/add curve during gait. Although only a few studies exist, and their results vary between approaches, these methods saw a significant improvement in reliability over the KAD or traditional AL methods.


    From here the methods have gone backwards.
    1. The T3DGait (2007) saw a return to the anatomical landmarks and minimal marker numbers of the traditional AL methods.
    2. Functional methods for joint centres (2010’s) have also seen a return, but have produced no improvement in reliability over traditional AL methods, adding time and complexity to the analysis for no benefit. Although both the KAD and functional methods are unreliable, it could be argued that for inter-session studies you should stick with the KAD method, as it has a more controlled approach that produces less variability between reliability studies. The functional method lacks a clearly defined protocol in terms of planes of movement, RoM, or controlling pelvic motion, potentially leading to larger variability in outcome measures between reliability studies.
    3. A recent addition is the IK model for gait analysis. Although there are no reliability studies, the published comparative gait data, and knowledge of the factors that affect reliability and validity, indicate that this is possibly the worst method presented for 3D gait analysis, combining the worst of 3DMA: whole body global optimization, a 1 df knee joint, an RFD on the thigh and functional joint centres.


    These later methods have all ignored previous literature and work on axes misalignment and non-linear error (cross-talk), with no attempt to assess or correct for axes misalignment.

    Something that has long been missing in the 3DMA literature is criteria against which to measure reliability outcome measures. This is part of the reason why every reliability study that I have read concludes that its method is reliable and suitable for 3DMA gait analysis, when clearly it is not. This conclusion has been based on a vaguely chosen value (such as ICC or CMC > 0.8) that has no relevance to gait curves and is applied to all joint angle data, despite it being known that the ROM varies considerably between different joint df’s (from 6 to 60 deg).
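    To make the argument concrete, here is a minimal sketch (hypothetical SD values, not taken from any study) of why a single ICC threshold cannot serve as a criterion across joints with very different ROM: using the standard relation SEM = SD·sqrt(1 − ICC), the same “acceptable” ICC corresponds to very different absolute errors in degrees.

```python
import math

# Hypothetical between-trial SDs, roughly scaled to typical joint ROMs.
# The same ICC threshold implies very different absolute errors in degrees.
joints = {
    "knee flex/ext (ROM ~60 deg)": 20.0,   # assumed SD, deg
    "knee abd/add  (ROM ~8 deg)":   3.0,   # assumed SD, deg
}
icc = 0.8  # the commonly quoted "acceptable" threshold

for joint, sd in joints.items():
    sem = sd * math.sqrt(1.0 - icc)        # standard error of measurement
    print(f"{joint}: SD = {sd:4.1f} deg -> SEM = {sem:4.1f} deg at ICC = {icc}")
```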

    I will end with an important consideration on the progress of 3DMA and a future trend. There was a huge leap in 3DMA technology in the mid 90’s which saw the emergence of the multi-camera, 3D tracking, real-time gait technology we see today. The 3DMA methods (marker sets, protocols etc.) have been left behind and have in recent years (in my opinion) gone backwards through a lack of critical thought, understanding and research into the underlying methods. There is currently another boom happening with wireless inertial sensors and their application to 3DMA. These come with claims of accuracy and reliability and are being promoted as tools for research and clinical gait analysis. In light of what I have presented on comparative reliability, as well as previous posts on marker based methods, the understanding of axes misalignment and normative gait data, these claims should be treated with caution. This is where I feel that 3DMA societies/groups and those conducting fundamental research into 3DMA methods need to step up to provide leadership in an area of 3DMA that is again changing rapidly with the introduction of new technology, if, as Paul DeVita says, it wants to become the “breakthrough science of the 21st century”. To do this, however, requires a huge step forward in the science and current understanding of 3DMA methods, validity and reliability.

    Allan

  • #2
    Re: 3DMA reliability

    The paper here by Todorov is a good read for anyone interested in these topics:

    https://www.ncbi.nlm.nih.gov/pubmed/18018688

    He proposed a probabilistic (Bayesian) "sensor fusion" approach to motion analysis: we have data from various sources (e.g. marker positions, GRF, IMUs, EMG), we know they all have various errors, what is the most likely motion given the data, what we know about the system, and what we expect the result to be?
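    As a minimal illustration of the idea (not Todorov's actual algorithm, and with made-up numbers), two independent Gaussian estimates of the same quantity can be fused by inverse-variance weighting, which is the simplest special case of this kind of probabilistic sensor fusion:

```python
# Minimal sensor-fusion sketch (hypothetical numbers): two independent estimates
# of the same knee flexion angle, fused by inverse-variance (Gaussian) weighting.
angle_marker, var_marker = 42.0, 4.0   # deg, deg^2 (assumed marker-based estimate)
angle_imu,    var_imu    = 45.0, 1.0   # deg, deg^2 (assumed IMU-based estimate)

w_marker = 1.0 / var_marker
w_imu    = 1.0 / var_imu
fused     = (w_marker * angle_marker + w_imu * angle_imu) / (w_marker + w_imu)
fused_var = 1.0 / (w_marker + w_imu)

print(f"fused = {fused:.2f} deg, variance = {fused_var:.2f} deg^2")
# -> 44.40 deg, 0.80 deg^2: the fused estimate is pulled toward the more
#    trusted sensor and is more certain than either source alone.
```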

    The math is more complicated than traditional motion analysis methods but I think it's an approach that has a lot of promise.

    Ross



    • #3
      Re: 3DMA reliability

      I would like to echo Ross' comment enthusiastically. Fusion of multiple sensors with a dynamic model will produce an optimal estimate in the Bayesian sense. These methods can (and must) be tuned according to how much the sensors are trusted and how much the model is trusted. If too much trust is placed on the model, bias is introduced. This is related to Allan Carman's concerns about inverse kinematics, which assumes zero error in the kinematic model.

      Such methods (Kalman filters) are already used inside inertial sensing systems for motion capture, but the dynamic models are still quite simple, not using multibody dynamics and muscle dynamics yet. So there still is a lot of room for improvement! Kalman filters are real-time and recursive. Off-line sensor fusion can be done with trajectory optimization approaches, as we described in van den Bogert et al., Procedia IUTAM 2011 (https://www.ncbi.nlm.nih.gov/pubmed/22102983).

      One elegant aspect of Todorov's work is that certain model parameters (such as bone lengths) are estimated simultaneously. The model includes (stochastic) differential equations for those parameters which basically state that they can change over time, but only slowly.

      Because a dynamic model is used, the analysis can always continue, even if no data is available at all! The dynamic model will simply simulate the motion forward from the last known state of the system, until data becomes available again and sensor fusion resumes. Also, low-pass filtering is no longer needed; this happens automatically (and optimally) because of the dynamic model. However, keep in mind that the quality of the results depends on making good assumptions about the accuracy of the data vs. the accuracy of the model. If the model and measurements agree well, this tuning is usually not very critical.

      Herman Woltring already envisioned all of this in 1981 (https://www.ncbi.nlm.nih.gov/pubmed/7240290) in a letter to the editor in Journal of Biomechanics. His final paragraph says:
      "A generalization of these approaches is to define a state-space model of rigid body positions, attitudes and their (first and second) derivatives, and to apply optimal prediction, filtering, and smoothing techniques (see, e.g. Gelb, 1974). This has the additional advantages of simultaneous derivative estimation and of automatic interpolation in the case of partial or complete data loss during finite time intervals, for example, due to loss of sight by one camera. Unfortunately, such more universal procedures are numerically expensive, and they require familiarity with contemporary developments in the realm of optimal control, system identification, and parameter estimation."

      I have had Gelb's book for a long time, but it is not an easy read. Recently, my favorite source is my colleague Dan Simon's book "Optimal State Estimation: Kalman, H-Infinity, and Nonlinear Approaches" (Wiley, 2006). Matlab code is available at http://academic.csuohio.edu/simond/estimation/

      Wikipedia gives a good general introduction and history: https://en.wikipedia.org/wiki/Kalman_filter

      Ton van den Bogert



      • #4
        Re: 3DMA reliability

        An issue for me with the approach in the Todorov paper (this may just reflect my lack of understanding; I'm not an expert in Bayesian methods) is that it modeled all measurement errors in the pose estimation problem as Gaussian. I don't think that is a good model of, for example, soft tissue artifact. Maybe this is not that important because it is where Ton's comment on "trust" comes in; this would be a situation where trust in both the model and the sensor would be low (assuming the goal is to compute skeletal pose).

        Ross



        • #5
          Re: 3DMA reliability

          I think that is a valid criticism.

          If you assume Gaussian errors (white noise), the maximum likelihood estimate becomes a least-squares estimate (by minimizing the negative log-likelihood), and this is convenient for numerical methods. Standard inverse kinematics uses least-squares, so it is implicitly based on assuming Gaussian error. Standard IK uses a static model, so there is no place where you can insert your knowledge that the soft tissue artifact (STA) is a smooth function of time.
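          As a small sanity check of that statement (a toy problem with assumed noise levels, not a real IK pipeline), minimizing the negative log-likelihood under a Gaussian noise model and minimizing the sum of squared marker residuals give the same estimate:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy "IK" problem: estimate a single joint angle theta from noisy 2D marker
# positions on a segment rotating about the origin (all values assumed).
true_theta = 0.7                                           # rad
local = np.array([[0.3, 0.0], [0.6, 0.05], [1.0, 0.0]])    # markers, segment frame

def forward(theta):
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return local @ R.T

sigma = 0.01                                               # assumed Gaussian noise
meas = forward(true_theta) + rng.normal(0, sigma, local.shape)

def neg_log_likelihood(theta):
    r = (meas - forward(theta[0])).ravel()
    return 0.5 * np.sum(r**2) / sigma**2                   # constant terms dropped

def sum_of_squares(theta):
    r = (meas - forward(theta[0])).ravel()
    return np.sum(r**2)

mle = minimize(neg_log_likelihood, x0=[0.0]).x[0]
lsq = minimize(sum_of_squares,     x0=[0.0]).x[0]
print(mle, lsq)   # same minimizer (up to solver tolerance)
```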

          I am speculating now, but I think that STA will be rejected better if that knowledge of smoothness is used in the estimation process. In the Bayesian framework with a dynamic model, you could model the STA with additional differential equations which can be quite simple. For instance, the model could assume that the STA is not white noise but an integral of white noise, which is smoother. The estimation equations are still the same, only the model has extra state variables and extra equations, the matrices are larger. Todorov's work can be extended this way. Tuning again is important. You have to assume the amplitudes of these noise sources.
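          A minimal sketch of that state augmentation (assumed dimensions and noise levels, nothing from Todorov's actual model): the artifact gets its own state driven by integrated white noise (a random walk), so the filter treats it as a slowly drifting signal rather than independent noise at every frame; the filter equations are unchanged, only the matrices grow.

```python
import numpy as np

dt = 0.01

# Augmented state: [bone position, bone velocity, soft tissue artifact (STA)].
# The STA state integrates white noise (a random walk), i.e. it is assumed to
# drift smoothly rather than jump independently at every frame.
A = np.array([[1.0, dt, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])          # STA(k+1) = STA(k) + white noise

q_bone = 1e-3                             # assumed process noise on bone motion
q_sta  = 1e-4                             # assumed drift rate of the STA
Q = np.diag([0.0, q_bone, q_sta]) * dt

# The skin marker measures bone position plus the STA, plus white sensor noise.
H = np.array([[1.0, 0.0, 1.0]])
R = np.array([[1e-4]])                    # assumed sensor noise variance

# Tuning q_sta expresses how fast the artifact is allowed to drift.
print(A.shape, Q.shape, H.shape, R.shape)
```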

          We're getting somewhat off the topic that Allan started. I totally agree with Allan that we are long overdue for making a huge step forward in how we are doing things in human movement analysis. And I agree with Ross that these Bayesian approaches have the potential to make that happen, and this needs to get more attention.

          There is, however, a potential difference of opinion. If Allan is already critical of standard IK, he may not be inclined to go in the direction of relying even more on models, which require even more assumptions.

          The future directions will also depend on applications. In research labs, we can possibly be very successful by improving our measuring techniques and relying less on models. We can do stereo fluoroscopy to eliminate STA, in vivo laser diffraction to measure muscle contraction, and so on. In real world applications (clinical and sports) we need to reduce cost and time. Non-invasive and cheap sensors, and real-time estimation with a Bayesian approach, will do that, but this requires sophisticated models. Maybe there will be (or needs to be) a divergence between these two worlds so that they hardly recognize each other anymore. I tend to favor a more model-based approach even for research labs. Modeling requires you to have an understanding of how a system works, so you get better science. The use of less expensive measuring techniques is another benefit, enabling good research in places where budgets are limited. I should add that this opinion is influenced by my research interest in dynamics and control. For orthopedics and joint mechanics, the preferences would be different.

          Ton



          • #6
            Re: 3DMA reliability

            This is all very interesting... but it seems to me the problem is not so much instrumentation accuracy as biological variability and skin movement artefact. I would have thought these are the "rate limiting step" of improving reliability?



            • #7
              Re: 3DMA reliability

              These are all wise words on a very critical issue in studying human movement and the biomechanics of the musculoskeletal system (MSS) in relevant human tasks.
              My five cents is to stress what Chris mentioned. Borelli started to consider the human MSS as a system of mechanical links. However, it is a model: the issue here is not simply how to measure the mechanics of a ground truth obscured by noise (Gaussian or otherwise). And although I embrace innovative solutions coming from sensor fusion, advanced IK etc., we must realize that biological variability between humans, and within humans performing a task, is inherent.
              Anatomical calibration (with its imperfections) and this inherent/biological variability of motor control and MSS morphology will set limits on what can be achieved. So the question remaining is what it is that we really want to know; in other words, what are the parameters that would serve the questions that are asked of our field, the biomechanics of movement. That is a cross-disciplinary question and not easy to answer, of course.
              Nevertheless, I hope this gives some context to this interesting discussion.



              • #8
                Re: 3DMA reliability

                I think that this is a good discussion, and long overdue. When I first became involved in 3DMA, initially supporting a 3D system installed at Shelly Simon's lab in Boston, and later installing and supporting 3D systems used with the Helen Hayes Software, the question on everyone's mind was, "Is it accurate?" Verifying and demonstrating the accuracy and synchronization of the 3D data, force plates and EMG systems was an essential step in the 3DMA installation. Once the hardware was "proven" we moved on to working with the clinical model and use of the software.

                Something that concerns me these days is that, in discussions with people entering the field, there seems to be an assumption that all the hardware works, that everything is synchronized, and that accuracy issues are a thing of the past. Nobody seems to understand how to look critically at their data any longer, or how to test and verify that the 3DMA environment is functioning accurately. Twenty years ago, if my child had an issue that required clinical gait analysis, I would have been happy to send her to any number of labs for evaluation - these days I'd be reluctant, and would insist on reviewing the lab, and the results, myself.



                • #9
                  Re: 3DMA reliability

                  I'll add/clarify that although he refers to it as "sensor noise" in the paper, "noise" in Todorov's approach is not a model of any specific source of error or complexity in the traditional skeletal pose estimation problem, e.g. marker reconstruction, soft tissue artifact, modeling assumptions, biological variance, etc. Noise in this approach represents all of those factors, i.e. any part of the problem that meaningfully affects how the measurement relates to the unmeasured pose of the skeleton.

                  A strength of the approach is that it is possible to include prior knowledge or expectations about the influence of those factors when estimating the pose.

                  Related to Edmund's comment, I think the field would benefit from having standards on lab accuracy that are reported with studies, like standard tests to verify the reconstruction accuracy of a MoCap setup, the alignment of force plates and cameras, etc. There is a bit of literature on this, e.g.:

                  https://www.ncbi.nlm.nih.gov/pubmed/12770634
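                  As one concrete (and deliberately simple) example of the kind of spot check that could be standardized, the reconstructed distance between two markers fixed to a rigid wand should be constant; its mean and SD across a capture give a quick indication of reconstruction accuracy. A sketch, assuming the two trajectories are already available as arrays:

```python
import numpy as np

def wand_length_check(p1, p2):
    """p1, p2: (n_frames, 3) reconstructed positions of two markers fixed to
    a rigid wand. Returns the mean and SD of their distance."""
    d = np.linalg.norm(p1 - p2, axis=1)
    return d.mean(), d.std()

# Synthetic usage example: a 500 mm wand reconstructed with ~0.5 mm noise.
rng = np.random.default_rng(0)
n = 1000
p1 = rng.normal(0.0, 0.0005, (n, 3))
p2 = np.array([0.5, 0.0, 0.0]) + rng.normal(0.0, 0.0005, (n, 3))
mean_len, sd_len = wand_length_check(p1, p2)
print(f"wand length {mean_len*1000:.2f} mm, SD {sd_len*1000:.2f} mm")
```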

                  Ross



                  • #10
                    Re: 3DMA reliability

                    Thank you for the responses.

                    My initial post was aimed at the methods used to analyze 3D lower limb kinematics, in particular for clinical gait analysis, but it is equally applicable to research. I see a lack of knowledge of the validity, reliability and limitations of commonly used methods, as well as a lack of critical thought about the data produced. There has been a considerable amount of research on factors affecting 3DMA reliability, but I feel this research has only scratched the surface, and the full implications of axes misalignment for validity and reliability, and for the design of 3DMA methods, have not been fully realized. Hence the leap forward needed in the 3DMA methods that are used routinely in the lab.

                    Jaap: … although I embrace innovative solutions coming from sensor fusion, advanced IK etc. we must realize that biological variability between humans, and within humans performing a task, is inherent. Anatomical calibration (with its imperfections) and this inherent/biological variability of motor control and MSS morphology will set limits to what can be achieved. So the question remaining is what is it that we really want to know, in other words what are the parameters that would serve the questions that are asked to our field, the biomechanics of movement. That is a cross disciplinary question and not easy to answer of course.

                    Yes, there are biological variations between humans and within humans performing a task, as well as limitations in <current> methods, that limit what can be achieved. However, my answer to what we really want to know is simpler: to measure the 3D joint rotations of the hip, knee and ankle during gait, for the same person within the same session or across repeated sessions. So far this has been done poorly, with little similarity in what has been presented in the literature as normal gait kinematics, and with reliability studies that have all shown poor reliability and no consistency between different implementations of the same method.

                    Ross: I think the field would benefit from having standards on lab accuracy that are reported with studies, like standard tests to verify the reconstruction accuracy of a MoCap setup, the alignment of force plates and cameras, etc. There is a bit of literature on this…

                    As Ross indicated, standards have been proposed previously relating to lab set-up and the accuracy of the 3D system in reproducing 3D marker positions, and synchronization of the different measurement systems is important. As Chris mentions, a limiting factor in 3DMA is the treatment of STA, or more generally axes misalignment and its static and dynamic components. However, there are many other factors, such as filtering, the placement and number of markers, and joint df’s, that influence validity and reliability. This may take the form of a set of guidelines relevant to different approaches, with the standard ultimately being the measured validity and reliability of the joint angle data produced by the examiner’s/lab’s methods relative to criterion outcome measures.

                    I was not aware of the Todorov (2007) article, and I have never implemented a Kalman filter, so my understanding comes from a theoretical perspective. One application of the Kalman filter is to smooth noisy 2D and 3D measured or sensor data. The filter uses a predictor-corrector approach in which the correction is based on the weighted difference between the measured and predicted sensor measurements.

                    The predicted state variables (position, velocity and acceleration) describing the system at the current point in time, vector X(i), are given by the known previous state variables, vector X(i-1), and the equations of motion, represented by the prediction matrix A. Similarly, the covariance matrix at the current point in time, P(i), is predicted using the known uncertainties (normal distribution) in the state variables (variances, σ^2) in matrix P(i-1) at the previous point in time, and matrix A.

                    X(i) = A.X(i-1)
                    P(i) = A.P(i-1).A^T

                    The predicted sensor measurements, vector Z’, are obtained via the transformation matrix H describing the relationship between the state variables and the sensor data. The units of the sensor measurements and state variables are not necessarily the same. The uncertainties in the predicted sensor measurements, covariance matrix E0, are also given via matrix H, while the known uncertainties in the sensor measurements (variances, σ^2) are given by covariance matrix R.

                    Z’ = H.X(i)
                    E0 = H.P(i).H^T
                    E1 = R

                    The smoothed sensor measurements, vector Z*, are given by the weighted difference between the recorded sensor measurements, vector Z, and the predicted sensor measurements, vector Z’.

                    Z* = Z’ + K.[Z - Z’], or H.X*(i) = H.X(i) + K.[Z - H.X(i)]

                    with vector K the Kalman gain, containing the weighting for each sensor measurement:
                    K = E0/(E0 + E1) = [H.P(i).H^T]/[H.P(i).H^T + R]

                    If R -> [0] then K -> [1] and if R -> [inf] or P -> [0] then K -> [0]

                    The gain K reflects ‘how much the sensors are trusted and how much the model is trusted’, but based on the known uncertainties (variances, σ^2) of the model-predicted measurements (E0 = H.P(i).H^T) and the actual sensor measurements (E1 = R). If the uncertainty in the predicted sensor measurements equals that of the sensor (E0 = E1) then K = 0.5 and the smoothed sensor measurements are the average of the predicted and actual sensor measurements. The filter will introduce a lag between the measured and smoothed sensor data, and for the Kalman filter to work well you need a good knowledge of the variances (uncertainty in measurement) of both the modelled state variables and the measured sensor data.

                    The corresponding state variables, X*(i), and covariance (uncertainty) at the current point in time are (these become X(i-1) and P(i-1) in the next time step):
                    X*(i) = X(i) + K.H^T.[Z - H.X(i)]
                    P*(i) = P(i) - K.P(i)
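                    To tie the notation together, here is a minimal runnable sketch of this predict-correct cycle on a toy 1D tracking problem (assumed noise levels), written with the textbook matrix form of the gain, K = P.H^T.(H.P.H^T + R)^-1, rather than the simplified measurement-space ratio above:

```python
import numpy as np

# Toy 1D example (assumed values) of the predict-correct cycle described above.
dt = 0.01
A = np.array([[1.0, dt, 0.5 * dt**2],
              [0.0, 1.0, dt],
              [0.0, 0.0, 1.0]])              # state X = [position, velocity, acceleration]
H = np.array([[1.0, 0.0, 0.0]])              # the sensor measures position only
Q = np.diag([0.0, 0.0, 1e-6])                # assumed (small) process noise
R = np.array([[1e-4]])                       # assumed sensor noise variance (SD 0.01)

rng = np.random.default_rng(0)
t = np.arange(0.0, 2.0, dt)
truth = 1.0 * t + 0.5 * 2.0 * t**2           # constant-acceleration motion
Z = truth + rng.normal(0.0, 0.01, t.size)    # noisy sensor measurements

X = np.zeros(3)                              # initial state estimate
P = np.eye(3)                                # initial uncertainty
for z in Z:
    # prediction: X(i) = A.X(i-1), P(i) = A.P(i-1).A^T + Q
    X = A @ X
    P = A @ P @ A.T + Q
    # correction: the gain weights sensor vs. model by their uncertainties
    S = H @ P @ H.T + R                      # = E0 + E1 in the notation above
    K = P @ H.T @ np.linalg.inv(S)
    X = X + K @ (np.array([z]) - H @ X)      # X*(i)
    P = (np.eye(3) - K @ H) @ P              # P*(i)

print("final estimate [pos, vel, acc]:", np.round(X, 3))
print("truth                         :", [round(t[-1] + t[-1]**2, 3), round(1 + 2 * t[-1], 2), 2.0])
```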

                    Todorov (2007) minimised a cost function to derive X*(i). This is the sum of the squared difference between the sensor measurements and the predicted sensor measurements, divided (normalized) by the sensor variance (σ^2), and the squared difference between the predicted state variables and the current state variables, divided (normalized) by the variance (σ^2) in the state variables:

                    F(X*) = [(Z - H.X*)^T . R^-1 . (Z - H.X*)] + [(X* - X)^T . P^-1 . (X* - X)]

                    My first impression of the Todorov method was in agreement with Ross’s and Chris’s comments:

                    Ross: An issue with the approach in the Todorov paper to me (this may just reflect my lack of understanding, I'm not an expert in Bayesian methods) is that it modeled all measurement errors in the pose estimation problem as Gaussian. I don't think that is a good model of, for example, soft tissue artifact.
                    Chris: … but it seems to me the problem is not so much instrumentation accuracy as biological variability and skin movement artefact. I would have thought these are the "rate limiting step" of improving reliability?

                    A limitation of the Kalman filter method is modelling uncertainty in measurement as Gaussian (random white noise with variance σ^2). This may be the case if you take a sensor (3D marker point, GPS or accelerometer) in isolation. However, in reconstructing anatomical (skeletal) segment axes from multiple surface markers the errors cannot be considered purely random noise. When creating 3D marker paths, sources of error include errors in 3D point reconstruction, 3D tracking, mis-identification, gap filling and smoothing. A dominant source of error in 3DMA is axes misalignment when reconstructing the underlying anatomical segment, which includes errors when initially defining the axes and STA during the movement. Least squares accounts for random noise in 3D marker position and some of the non-rigidity between markers/sensors on a segment. However, the dominant (limiting) errors are not random, are not known a priori, and vary considerably in magnitude between segments, individuals, types of movement and within movement trials. They are also influenced by numerous factors including the number and placement of markers, the soft tissue properties of the segment and the analytical methods.

                    Another issue recognized by the author was the lack of convergence and the avoidance of local minima when modelling a large system such as multiple 3D body segments. In a simplified 7-segment model with predominantly 1 df joints the author found convergence a problem and recommended 3D marker data only over combining 3D marker and accelerometer data.

                    Ross: …although he <Todorov> refers to it as "sensor noise" in the paper, "noise" in Todorov's approach is not a model of any specific source of error or complexity in the traditional skeletal pose estimation problem, e.g. marker reconstruction, soft tissue artifact, modeling assumptions, biological variance, etc. Noise in this approach represents all of those factors, i.e. any part of the problem that meaningfully affects how the measurement relates to the unmeasured pose of the skeleton.

                    No, the Kalman filter method as described does not correct for known errors in 3DMA such as offset errors when defining anatomical axes, soft tissue artefact during movement, or errors in 3D marker paths. It uses knowledge of the measurement errors (the variance describing the magnitude of random noise) in the modelled and sensor data purely as a weighting in the prediction-correction approach (which source of data the filter will put more emphasis on) when calculating the next iteration of smoothed sensor data and corresponding state variables. The approach does not account for, or contain processes to correct, errors of axes misalignment between the reconstructed segment and the underlying anatomical segment. Static offset errors in axes alignment when initially defining segment axes are unknown, while for soft tissue artefact the best we can do is a good initial approximation of the dynamic errors in axes alignment based on similar subjects, marker placement and movement patterns. These axes alignment errors are not known a priori and must be established within the analytical process.

                    Ross: … I don't think that is a good model of, for example, soft tissue artifact. Maybe this is not that important because it is where Ton's comment on "trust" comes in; this would be a situation where trust in both the model and the sensor would be low (assuming the goal is to compute skeletal pose).

                    Axes misalignment, through static offsets or dynamic STA, is not just important, it is critical! In this situation I don’t think ‘trust’ in either the modelled or measured sensor data is the correct way to view it. If the approach does not have sound checks in place for marker identification, gap filling and smoothing, as well as for assessing and correcting errors when defining the initial segment alignment and the dynamic soft tissue artefact during movement, then you are not going to reconstruct the underlying anatomical segment with any degree of certainty. The calculated non-sagittal joint rotations of the leg will be unreliable and potentially meaningless.

                    Ton: I am speculating now, but I think that STA will be rejected better if that knowledge of smoothness is used in the estimation process. In the Bayesian framework with a dynamic model, you could model the STA with additional differential equations which can be quite simple. For instance, the model could assume that the STA is not white noise but an integral of white noise, which is smoother. The estimation equations are still the same, only the model has extra state variables and extra equations, the matrices are larger. Todorov's work can be extended this way. Tuning again is important. You have to assume the amplitudes of these noise sources.

                    As STA is not random, modelling STA on white noise will not eliminate STA. However, I agree that axes misalignment (both static and dynamic components) needs to be included in the model if you are to produce valid and reliable anatomically based segments. The challenge will be how to assess and correct these errors when the Kalman filter approach only considers the fit between modelled and measured sensor data (3D marker positions). Increasing model complexity and the number of state variables will also add to the difficulties of convergence and local minima.

                    The treatment of error is the downfall of the traditional IK model. If all errors were random white noise, were of similar magnitude for every segment, were similar across movement trials, and we could accurately define joint centres that only had 3 df, then the traditional IK approach (global least squares with 3 df joints) would have merit.

                    Ton: “There is however, a potential difference of opinion. If Allan is already critical of standard IK, he may not be inclined to go in the direction of relying even more on models which requires even more assumptions”

                    Correct, I would avoid relying on assumptions about the magnitudes and nature of the errors in defining and reconstructing segment location in 3DMA. Instead, understand the nature of those errors and the ways that they can be measured or deduced, and corrected or accounted for. As mentioned previously, errors in axes alignment offsets and due to STA need to be established within the methods. The Kalman filter approach does have merit in combining multiple sources of sensor information, but it does have limitations in its application to 3DMA. However, I also have to take a step back and, as with my marker based treatment of axes misalignment and non-linear errors in an inverse dynamics least squares approach, realize that it is a work in progress and that present assumptions and limitations will be improved upon.

                    Cheers
                    Allan



                    • #11
                      Re: 3DMA reliability

                      Originally posted by Allan Carman View Post
                      Thank you for the responses.
                      No, the Kalman filter method as described does not correct for known errors in 3DMA such as offset errors when defining anatomical axes or soft tissue artefact during movement or errors in 3D marker paths.
                      Allan, maybe I am misunderstanding what you mean by "correct for", but my statement was not incorrect: "noise" in Todorov's approach does not represent any particular source of error. Rather, the noise is a necessary element to produce a generative model that can potentially deal with many sources of error. The user chooses how to define the noise (Todorov seems to imply this choice is not critical, although I am not sure of that).

                      Whether the approach corrects for errors due to axis alignment or STA or any other specific source of error is another question (but that was not the statement I made). It definitely attempts to account for those things, though:

                      "The problem is formulated in a probabilistic framework so as to handle multiple and unavoidable sources of uncertainty: sensor noise, soft tissue deformation and marker slip, inaccurate marker placement and limb measurement, and missing data due to occlusions"

                      For example, he discusses how STA can be included by defining correlations between sensor residuals and joint kinematics.

                      Ross



                      • #12
                        Re: 3DMA reliability

                        I agree that the Gaussian (white noise) representation of error or uncertainty is not specific to any one source of error, with a component of white noise present to varying degrees across different sources of error. I did not agree with Todorov's assertion that the Gaussian probabilistic description of uncertainty could handle multiple sources of error in 3DMA, particularly STA and marker misplacement. Axes misalignment between the axes reconstructed from skin markers or sensors and the underlying anatomical axes cannot be considered random noise and does not fit the Gaussian model. Axes misalignment errors (static offset or dynamic STA) are a major limiting factor on the validity and reliability of 3DMA, but they can be identified and described, and therefore corrected, to derive anatomical axes from external markers. The success of identifying and correcting axes alignment is up for debate, but it is a challenge that needs to be met.

                        Todorov: “The problem is formulated in a probabilistic framework so as to handle multiple and unavoidable sources of uncertainty: sensor noise, soft tissue deformation and marker slip, inaccurate marker placement and limb measurement, and missing data due to occlusions”

                        I think axes misalignment could be built into the Todorov model, but not with the current limited Gaussian view of error in 3DMA. As suggested by Ton, this could be done by introducing additional unknowns into the predictive model (represented in the Kalman filter by matrix A). These may be constant unknowns (axes offsets) and variables (STA) representing the displacement of the marker-based axes from the anatomical axes. In this Kalman filter model the state variables of position, velocity and acceleration now represent the anatomical segment axes, and the additional constants and variables describe the transformation to the marker-based axes, which then track the measured 3D skin marker data with the addition of white noise.

                        Todorov: “We expect our methods to outperform alternative, which ignore uncertainties.”

                        The assertion that alternative methods ignore uncertainties is not correct; a marker cluster design with a least squares approach accounts for random noise. Methods that attempt to measure axes misalignment (the most common approach being to minimize either the knee abd/add RoM or its correlation with knee flex/ext during gait, in order to adjust the thigh medio-lateral axis) will outperform the Todorov method as currently presented. As the Todorov method does not include any assessment of, or correction for, axes misalignment relative to the anatomical axes, I would expect there to be little difference between an inverse dynamics least squares approach that does not account for axes misalignment and the Todorov approach.
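                        For illustration, a small synthetic sketch of that common approach (a planar cross-talk approximation with made-up curves, not any published implementation): a misaligned medio-lateral knee axis makes flexion leak into abd/add, and the offset can be recovered by searching for the rotation that minimizes the abd/add range over the gait cycle.

```python
import numpy as np
from scipy.optimize import minimize_scalar

t = np.linspace(0.0, 1.0, 101)                       # one gait cycle
flex_true = 30.0 - 25.0 * np.cos(2 * np.pi * t)      # synthetic knee flex/ext (deg)
add_true  = 2.0 * np.sin(2 * np.pi * t)              # small true abd/add (deg)

delta = np.deg2rad(10.0)                             # "unknown" axis misalignment
flex_meas =  flex_true * np.cos(delta) + add_true * np.sin(delta)
add_meas  = -flex_true * np.sin(delta) + add_true * np.cos(delta)

def add_range_after_correction(alpha):
    # counter-rotate the measured angles by a candidate offset alpha
    add_corr = flex_meas * np.sin(alpha) + add_meas * np.cos(alpha)
    return np.ptp(add_corr)                          # abd/add range to minimise

res = minimize_scalar(add_range_after_correction,
                      bounds=(np.deg2rad(-30), np.deg2rad(30)), method="bounded")
print("recovered misalignment: %.1f deg (true 10.0 deg)" % np.rad2deg(res.x))
```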

                        Cheers
                        Allan



                        • #13
                          Re: 3DMA reliability

                          I am contributing to this discussion now because we recently provided tools to estimate the repeatability of gait analysis experiments, using Excel, Matlab and R. Please forgive this shameless self-publicity. The tools were linked to a publication that discussed the details of the calculations of the variance components.

                          About the broader topic of accuracy and repeatability in gait analysis, I believe one way to move forward would be to clarify our terminology. For example, I sometimes read the words ‘soft tissue artefact (STA)’ used in two different contexts. In most cases STA refers to the errors, during dynamic activities, due to the relative displacement of the skin and other soft tissues with respect to the bones the markers are supposed to follow. However, I sometimes read the words STA used to describe the fact that there are soft tissues (skin, muscle, fat, etc.) between the markers and the bony landmarks, for example at the pelvis over the ASIS, and that the presence of these soft tissues may bias our estimate of the bone positions. This is problematic because the two types of errors are very different, and the means to estimate the size, and the variability, of the errors are also different.

                          I’d like the distinction between calibration and tracking errors/accuracy/repeatability to be more explicit. Calibration is the process of registering the skin markers to the subject’s skeleton. Tracking is the process of estimating the ‘change in position’ of the segments/joints between the calibration and each instant (frame) during a dynamic activity.
                          The best way to estimate the accuracy of the calibration is to use medical imaging able to provide, at the same time and in the same calibration position (e.g. standing for gait analysis), the position of the markers and the bones (i.e. joint centres and anatomical axis systems). Improved calibration accuracy is becoming more and more important with the democratization of musculoskeletal modelling, because errors in the position of joint centres/axis systems may have a larger impact on muscle forces/joint contact forces than on kinematics or kinetics. Musculoskeletal modelling may also add the necessity to include subject-specific bone shape and muscle attachment sites when these depart from what is considered ‘normal’ anatomy. Calibration errors may be systematic (i.e. biased, for example the position of the HJC) and variable.

                          Similarly, the best way to estimate the tracking accuracy is to combine medical imaging (e.g. bi-plane fluoroscopy) with motion capture, or to use bone pins. Tracking errors are marker location, task and subject specific.
                          Given the same subject doing the same thing (e.g. walking) with markers roughly in the same positions on the skin, it is fair to assume the tracking errors will be the same. Therefore, in an experiment that estimates the variability of walking kinematics in the same subject during the same session (e.g. here), it is fair to assume that the variability measured is the subject variability, i.e. the biological variability mentioned in a previous post. Similarly, in an experiment that estimates the variability of walking in the same subject but during different sessions, it is fair to assume that the variability measured has multiple components (i.e. inter-trial, inter-session, and inter-assessor if the sessions were performed by different assessors, cf. here again), but that these are mostly calibration variability too, not tracking. So I think it is important to note that we have all the means to estimate calibration accuracy and variability.
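                          As a minimal sketch of what estimating those variance components can look like (a one-way random-effects, method-of-moments estimate on synthetic data; the actual tools linked to our publication are more complete):

```python
import numpy as np

def variance_components(x):
    """One-way random-effects estimate of variance components.
    x: (n_sessions, n_trials) array of a scalar gait outcome for one subject
    (e.g. peak knee flexion, deg). Returns (inter_trial_var, inter_session_var)."""
    n_sessions, n_trials = x.shape
    grand = x.mean()
    ms_between = n_trials * np.sum((x.mean(axis=1) - grand) ** 2) / (n_sessions - 1)
    ms_within = np.sum((x - x.mean(axis=1, keepdims=True)) ** 2) / (n_sessions * (n_trials - 1))
    inter_trial = ms_within
    inter_session = max((ms_between - ms_within) / n_trials, 0.0)
    return inter_trial, inter_session

# Synthetic example (assumed values): 5 sessions x 5 trials, true SDs of
# 1.0 deg between trials and 2.0 deg between sessions.
rng = np.random.default_rng(0)
session_means = 60.0 + rng.normal(0.0, 2.0, 5)
data = session_means[:, None] + rng.normal(0.0, 1.0, (5, 5))
vt, vs = variance_components(data)
print("inter-trial SD %.2f deg, inter-session SD %.2f deg" % (vt**0.5, vs**0.5))
```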

                          Tracking errors (basically STA) are more difficult to estimate, and there are international groups working together on this topic (the Soft Tissue Artifact Propagation Attenuation Group, STAPAG, and a special issue coming up in the Journal of Biomechanics).
                          I believe that in gait analysis specifically, the tracking errors may be secondary to the calibration errors, thanks to the slow movement and reasonably small joint ranges of motion, and that calibration errors are preventable (using medical imaging, e.g. freehand 3D ultrasound (hip, knee) or EOS (hip, knee)) or can be estimated.



                          • #14
                            Re: 3DMA reliability

                            Morgan,

                            Originally posted by msangeux93 View Post
                            I am contributing to this discussion now because we recently provided tools to estimate the repeatability of gait analysis experiments, using Excel, Matlab and R. Please forgive this shameless self-publicity. The tools were linked to a publication that discussed the details of the calculations of the variance components.
                            I agree that the SD or variance should be used rather than the CMC and ICC to describe the reliability of gait joint angle data. It is well recognised that both the CMC and ICC depend on joint ROM (Røislien et al., J Biomech, 2012). This is a limitation of these measures, and means that CMC and ICC values cannot be directly compared between different joints with varying ROM, or for the same joint across different activities with different ROM. In addition, criterion values for acceptable CMC or ICC need to be specific to the joint and ROM, something that has not been addressed in the 3DMA reliability literature, which uses vague values in the range of 0.7-0.8 as acceptable and applies them to all joints. These general CMC or ICC criterion values also have no relevance to gait joint angle data.

                            Apart from this limitation, the reason why the CMC and ICC are not suited to describing joint data reliability is that they are bounded above by ‘1’; with desired CMC and ICC values ranging from >0.98 for knee flex/ext to >0.84 for knee abd/add, the values are compressed near this bound and become insensitive to real differences between sets of gait curves. To see this, plot CMC or ICC against the 95% CI for a large collection of varying sets of gait joint angle curves: as the CMC and ICC values approach ‘1’, the 95% CI of the sets of curves reduces (the curves become more similar in shape) and the slope of this relationship approaches zero. If you include a 95% confidence interval on the estimated CMC or ICC values, then a set of gait joint angle curves could be anywhere from near identical to unrelated on the flat part of this curve, which is where joint angle data operate. Therefore, in describing the reliability of gait joint angles, the CMC and ICC are meaningless (particularly for knee and hip flex/ext).

                            It is common for the ICC and CMC to be calculated from the SS and df terms of the ANOVA table (Shrout and Fleiss, 1979; Moore & McCabe, 1999). There are worked examples in the Excel spreadsheet describing normal gait joint angle data that I previously presented on Biomch-L. As you point out, Kadaba et al. (1989) used an adjusted CMC (CMCa), dividing the numerator (SSE) and denominator (SST) by their respective df’s (N.T - T and N.T - 1). This is not the traditional CMC and reduces the CMC, because the ratio SSE/SST is scaled by the factor (N.T - 1)/(N.T - T). In gait analysis the number of data points representing a gait cycle is usually 100 (T = 100) and the number of session, subject or tester repeats is relatively small (N = 2 to 4), so the factor can be approximated by N/(N - 1); the effect is to penalize your estimated CMC relative to the number of repeats. For example, if you have a CMC of 0.90 then SSE/SST = 0.19, and with N = 2 (test-retest) CMCa = 0.79. The adjusted CMCa also has the limitation that if the CMC falls below 0.707 and SSE/SST goes above 0.5, then for N = 2 the CMCa becomes indeterminate. Remember that you are trying to estimate a population or true CMC based on a small sample, so the more repeats you include (sessions, subjects), the stronger the study, the smaller the confidence interval and the better your estimate of the true CMC.

                            Another limitation of the adjusted CMCa is that it is directly dependent on the number of repeats used, such that the CMCa cannot be compared between studies using different numbers of subjects or sessions. I have been more critical of the adjusted CMCa, stating that the CMCa is no longer a correlation coefficient, should not be presented as one, does not represent the true or population CMC, and should not be used as a reliability outcome measure in gait reliability. Unfortunately, the use of the adjusted CMCa is now common in the gait reliability literature. Also, the ‘adjusted’ part of CMCa (Kadaba et al., 1989) has been left off the description and forgotten within the 3DMA literature, and it has simply been called the CMC, which it is not. It is not readily apparent, but in the comparative reliability data I presented recently on Biomch-L, I ‘un-adjusted’ the CMCa values based on the study design to produce an estimated CMC value; this corrected value could then be compared between studies, and the subject or session variability (or pooled SD) estimated.
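                            A short worked check of the numbers above (assuming N = 2 repeats, T = 100 points per gait cycle, and the relation SSE/SST = 1 − CMC² used in the example):

```python
import math

# Worked check of the adjustment described above (assumed N = 2 repeats and
# T = 100 points per gait cycle): the Kadaba-style adjusted CMCa penalises
# the ratio SSE/SST by (N.T - 1)/(N.T - T) ~ N/(N - 1).
N, T = 2, 100
cmc = 0.90
sse_over_sst = 1.0 - cmc**2                      # = 0.19
factor = (N * T - 1) / (N * T - T)               # ~ N/(N - 1) = 2 for large T
cmca_sq = 1.0 - sse_over_sst * factor
cmca = math.sqrt(cmca_sq) if cmca_sq > 0 else float("nan")   # indeterminate if negative
print("CMC = %.2f  ->  adjusted CMCa = %.2f" % (cmc, cmca))  # 0.90 -> 0.79

# The indeterminacy: for N = 2 the adjusted value is undefined once CMC < ~0.707
print("CMC = 0.70 -> CMCa^2 = %.3f (negative, so CMCa is indeterminate)"
      % (1.0 - (1.0 - 0.70**2) * factor))
```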

                            Cheers
                            Allan Carman



                            • #15
                              Re: 3DMA reliability

                              Originally posted by Allan Carman View Post
                              I agree that StDev or variance should be used rather than the CMC and ICC to describe reliability of gait joint angle data.
                              Thanks, I think most of us agree, yet a LOT of studies dealing with gait/motion capture reliability provide only ICC, CMC, etc. results without also discussing or supplying the variance components. As stated by the statistical reviewer in the comments to our article:
                              Originally posted by G&P reviewer
                              To me, the critical point is that knowledge of the variance components allows precision of the estimation of change to be determined under different scenarios for measuring the change. That can be placed in the clinical context of what constitutes a clinically significant change. This is in stark contrast to measures such as the ICC which bears no relationship at all to clinical significance. The oft-quoted ranges of values for the ICC that can be considered as excellent etc. are, in my opinion, worse than useless as they can mislead on the reproducibility of a new assessment tool. Indeed, I have reviewed papers in which an 'excellent' ICC was obtained for an assessment tool that was clinically insufficiently accurate to be useful.
                              In this context, it might seem counter-productive that we discussed how to obtain the ICC and CMC from the variance components.

                              Our strategy is in fact to make sure the variance components are calculated first and foremost, i.e. that they appear in the results at a minimum, rather than the ICC/CMC/etc. being chosen instead of the variance components.

