Cutoff frequency for doing inverse dynamic


  • Cutoff frequency for doing inverse dynamic

    Hi all,
    I'm wondering what cutoff frequency I should use to filter the marker coordinate and ground reaction force (GRF) data for inverse dynamics (calculating the knee moment) in a drop-landing movement. Coordinates and GRF were sampled at 120 and 2040 Hz, respectively. I guess the typical cutoffs may be 15 and 50 Hz, but I have read a few articles (Kristianslund et al., 2012; Bisseling & Hof, 2006) suggesting that you should use the same cutoff frequency (15 Hz) for both the coordinates and the GRF to minimize the impact artifact in the knee moment. Any suggestions? Thanks a lot!
    Yumeng
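
    For anyone wanting to try the matched-cutoff approach described in Kristianslund et al. (2012), here is a minimal sketch in Python, assuming SciPy is available. The signals are synthetic stand-ins for real recordings; only the sample rates and the 15 Hz cutoff come from the question.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass(data, cutoff_hz, fs_hz, order=2):
    # Zero-lag (forward-backward) Butterworth low-pass, as is common in gait analysis.
    b, a = butter(order, cutoff_hz / (fs_hz / 2), btype="low")
    return filtfilt(b, a, data)

rng = np.random.default_rng(0)
fs_marker, fs_grf = 120, 2040            # sample rates from the question
t_m = np.arange(0, 1, 1 / fs_marker)
t_f = np.arange(0, 1, 1 / fs_grf)

# Synthetic marker height (m) and vertical GRF (N) with added noise.
marker_z = 0.5 + 0.05 * np.sin(2 * np.pi * 3 * t_m) + 0.001 * rng.standard_normal(t_m.size)
grf_z = 800 * np.exp(-((t_f - 0.1) ** 2) / 0.002) + 5 * rng.standard_normal(t_f.size)

# Matched 15 Hz cutoff for both signals, per Kristianslund et al. (2012).
marker_f = lowpass(marker_z, 15, fs_marker)
grf_f = lowpass(grf_z, 15, fs_grf)
```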

  • #2
    Re: Cutoff frequency for doing inverse dynamic

    Hi Yumeng,

    I previously asked the same question and got some fantastic replies and references. See (https://biomch-l.isbweb.org/forum/bi...t-is-best-prac).

    Bernard
    Last edited by Ton van den Bogert; January 8, 2024, 05:06 PM. Reason: fixed the link



    • #3
      Re: Cutoff frequency for doing inverse dynamic

      I would recommend not filtering the 3D coordinate data at all, and only filtering the force plate data to remove high-frequency noise. Instead, use a least-squares cluster design (6 or more markers per segment) and filter the calculated joint angle data, resultant joint forces, and moments. The least-squares approach does act as a low-pass filter for high-frequency noise, but more importantly, the RMS error calculated between local and global coordinates of the segment markers can be used to screen for, identify, and remove or correct errors in the marker coordinates, including marker reconstruction, tracking, and identification errors. These errors cannot be addressed if the raw 3D coordinate data have been smoothed.
      If the 3D coordinate data were filtered by a low-pass filter reflecting the maximum frequency of interest, it would not be correct to filter the force data at the same frequency (as has been suggested, both at 15 Hz). For a simple periodic arm movement, the frequency of changes in velocity will be twice, and of acceleration four times, the frequency of the position data. Therefore, if acceleration data were to be filtered, it should be filtered at a frequency at least four times that used for the position data. Similarly, force plate data should be filtered at a frequency at least four times that used for the position data, if filtering were used at all. Foot impacts in running or jumping occur at far higher frequencies, and the limitation is the maximum frame rate of the 3D video system (200-250 Hz). Collecting force data is not usually an issue: it is routinely sampled at 1,000 Hz regardless of the movement, and can be pushed higher for impacts, as you have done. For impacts, filtering either position or force data with a 15 Hz low-pass is far too low and would not be appropriate.



      • #4
        Re: Cutoff frequency for doing inverse dynamic

        Originally posted by Allan Carman View Post
        I would recommend not filtering the 3D coordinate data at all, and only filtering the force plate data to remove high-frequency noise. Instead, use a least-squares cluster design (6 or more markers per segment) and filter the calculated joint angle data, resultant joint forces, and moments. The least-squares approach does act as a low-pass filter for high-frequency noise, but more importantly, the RMS error calculated between local and global coordinates of the segment markers can be used to screen for, identify, and remove or correct errors in the marker coordinates, including marker reconstruction, tracking, and identification errors. These errors cannot be addressed if the raw 3D coordinate data have been smoothed.
        This is good advice, but I have a few comments. With 6-DOF joints (i.e. no joints in the model), 3 markers per segment (not 6) is sufficient. With fewer degrees of freedom, you can get away with even fewer markers by doing least-squares analysis on a full body model (as in Opensim's inverse kinematics). More markers and fewer degrees of freedom will produce more noise reduction via least squares. 6 markers may be appropriate for certain applications, but this would be hard to do for a full body motion capture. On the other end of the spectrum, if we only need to track a single segment, we might be able to use considerably more than 6 markers.

        The least-squares approach reduces error, but it is not correct to say that it acts as a low-pass filter. Marker coordinate error is reduced equally for high and low frequencies. That's actually better than a low-pass filter, which does not remove low-frequency error.

        Screening for tracking errors, by looking for outliers in the model residuals, is indeed an important capability that comes from using least squares, and as Allan said, it must be done before filtering.
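
        To illustrate the screening idea, here is a sketch (my own construction, not from any specific software) of a least-squares rigid-body fit via the Kabsch/SVD method, with the RMS residual that can flag tracking or identification errors:

```python
import numpy as np

def fit_rigid(local, measured):
    # Least-squares fit of R, t so that R @ local_i + t ~ measured_i,
    # returning the RMS residual used to screen for marker errors.
    lc = local - local.mean(axis=0)
    mc = measured - measured.mean(axis=0)
    U, _, Vt = np.linalg.svd(mc.T @ lc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflections
    R = U @ D @ Vt
    t = measured.mean(axis=0) - R @ local.mean(axis=0)
    resid = local @ R.T + t - measured
    rms = np.sqrt(np.mean(np.sum(resid ** 2, axis=1)))
    return R, t, rms

# Example: a 6-marker cluster, a known pose, and a deliberately swapped marker pair.
local = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0], [.5, .2, .8]])
th = 0.3
R0 = np.array([[np.cos(th), -np.sin(th), 0], [np.sin(th), np.cos(th), 0], [0, 0, 1.]])
measured = local @ R0.T + np.array([1., 2, 3])
_, _, rms_ok = fit_rigid(local, measured)        # ~0: clean tracking
bad = measured.copy()
bad[[0, 1]] = bad[[1, 0]]                        # two markers mislabeled
_, _, rms_bad = fit_rigid(local, bad)            # residual jumps: flag this frame
```

A swapped or mislabeled marker makes the residual jump by orders of magnitude, which is exactly the screening opportunity that is lost if the raw coordinates are smoothed first.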

        I am starting to warm up to the idea of not filtering raw data, but, as Allan suggests, doing the entire kinematic/dynamic analysis with noisy data and then filtering the final results. In the old days, we would look at intermediate results (e.g. accelerations) and, if they looked terribly noisy, we would not like it. But noisy acceleration is not a problem if, in the equations, it gets multiplied by a small mass and/or the final force or moment result is still going to be filtered before it is used.

        If the 3D coordinate data were filtered by a low-pass filter reflecting the maximum frequency of interest, it would not be correct to filter the force data at the same frequency (as has been suggested, both at 15 Hz). For a simple periodic arm movement, the frequency of changes in velocity will be twice, and of acceleration four times, the frequency of the position data. Therefore, if acceleration data were to be filtered, it should be filtered at a frequency at least four times that used for the position data. Similarly, force plate data should be filtered at a frequency at least four times that used for the position data, if filtering were used at all. Foot impacts in running or jumping occur at far higher frequencies, and the limitation is the maximum frame rate of the 3D video system (200-250 Hz). Collecting force data is not usually an issue: it is routinely sampled at 1,000 Hz regardless of the movement, and can be pushed higher for impacts, as you have done. For impacts, filtering either position or force data with a 15 Hz low-pass is far too low and would not be appropriate.
        Differentiation does not change the frequency of a signal, but it will amplify the higher frequencies by a factor proportional to the frequency squared (in the case of acceleration). So the spectrum will have more high frequency content, but it's not correct to say that the cutoff frequency should be a factor 4 higher. The difference in cutoff frequency between the optimal acceleration filter and the optimal position filter is much smaller. So small, in fact, that we usually don't bother with different filters for position and acceleration. Hatze's 1981 paper is an exception (http://www.ncbi.nlm.nih.gov/pubmed/7217111).
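
        This point is easy to verify numerically: differentiating a two-tone signal leaves the power at the same frequencies, while the relative amplitude of the higher tone grows by (ω_hi/ω_lo)². A quick sketch (my own construction, with made-up signal content):

```python
import numpy as np

fs = 1000
t = np.arange(0, 2, 1 / fs)
# Position: a 2 Hz "movement" plus a small 20 Hz component (e.g. soft-tissue noise).
x = 1.0 * np.sin(2 * np.pi * 2 * t) + 0.01 * np.sin(2 * np.pi * 20 * t)
a = np.gradient(np.gradient(x, t), t)        # numerical acceleration

f = np.fft.rfftfreq(t.size, 1 / fs)
X, A = np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(a))
i2, i20 = np.argmin(np.abs(f - 2)), np.argmin(np.abs(f - 20))

ratio_pos = X[i20] / X[i2]   # about 0.01: the 20 Hz term is tiny in position
ratio_acc = A[i20] / A[i2]   # about 1.0: amplified by (20/2)^2 = 100, same frequencies
```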

        There are now several papers about inverse dynamics of impact (some are cited earlier in this thread) and these show that force and motion data must generally be filtered with the same filter, when doing inverse dynamic analysis. If motion data is filtered at 15 Hz (otherwise accelerations would be too noisy), this removes the inertial forces above 15 Hz. This means that all other forces above 15 Hz (including force plate data) must also be removed, to avoid artifacts. If you don't filter the raw data at all, but filter the final result at 15 Hz, as Allan suggests, this is practically equivalent and you would get almost the same result.
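
        The "practically equivalent" claim has a simple reason: a linear filter commutes with any linear computation, and the moment equation is linear in the accelerations and forces (the nonlinear kinematics is what keeps the equivalence from being exact). A toy sketch, where the mass, moment arm, and signals are all hypothetical:

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(1)
fs = 2040
t = np.arange(0, 1, 1 / fs)
b, a = butter(2, 15 / (fs / 2), btype="low")   # 15 Hz dual-pass Butterworth

m, d = 3.0, 0.05   # hypothetical segment mass (kg) and moment arm (m)
acc = np.sin(2 * np.pi * 3 * t) + rng.normal(0, 5, t.size)                 # noisy acceleration
grf = 700 * np.exp(-((t - 0.1) ** 2) / 0.001) + rng.normal(0, 10, t.size)  # impact-like GRF

# Route A: filter the raw inputs at 15 Hz, then evaluate the (linear) moment equation.
route_a = m * d * filtfilt(b, a, acc) - d * filtfilt(b, a, grf)
# Route B: evaluate with raw data, then filter the final moment at 15 Hz.
route_b = filtfilt(b, a, m * d * acc - d * grf)
# For a linear model, the two routes agree to floating-point precision.
```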

        My 1996 simulation study (http://isbweb.org/data/invdyn/csb96.pdf) showed that for some variables (e.g. ankle reaction force) it is better to filter force data at a higher frequency (56 Hz) than motion data, which reflects the fact that the inertial forces are much smaller in the foot. So there is some potential to do better than simply using one filter for everything, but this will require doing the filtering on raw data (not on the final result), and these various filter settings will be hard to optimize, unless you already know the correct forces and moments (as in my simulation study).

        Ton van den Bogert



        • #5
          Re: Cutoff frequency for doing inverse dynamic

          Thank you, all of you, for the very helpful suggestions.
          Yumeng



          • #6
            Re: Cutoff frequency for doing inverse dynamic

            Ton's reply highlights some fundamental differences that exist in approaches to, and opinions on, 3D motion analysis methods; as you read on, you will see just how different they can be.

            Originally posted by bogert View Post
            This is good advice, but I have a few comments. With 6-DOF joints (i.e. no joints in the model), 3 markers per segment (not 6) is sufficient. With fewer degrees of freedom, you can get away with even fewer markers by doing least-squares analysis on a full body model (as in Opensim's inverse kinematics). More markers and fewer degrees of freedom will produce more noise reduction via least squares. 6 markers may be appropriate for certain applications, but this would be hard to do for a full body motion capture. On the other end of the spectrum, if we only need to track a single segment, we might be able to use considerably more than 6 markers.
            I have always used a marker-cluster, six-degree-of-freedom, least-squares design that includes virtual joint centres, ranging from a 7-segment lower-body clinical gait set (38 markers) to full body (82 markers), which has included gymnastics, rowing, soccer penalty kicks, throwing, and cricket fast bowling. This has not proved too difficult with a 12-camera Motion Analysis Corporation system.

            I do not agree with constraints being placed on joints (zero displacement or zero rotation for certain degrees of freedom) to enable a global optimization approach, or to reduce the number of markers placed on the subject to three or even fewer. To me, the joint constraints are unrealistic given the uncertainties in joint centre locations, the alignment of axes with anatomical rotations, and the influence of skin movement artefact. Large distortions in 3D segment locations can result in order to maintain the constraints, producing meaningless results. With modern 3D systems, limiting the number of segment markers to three per segment at the expense of accuracy and reliability is unnecessary. While the joint-constrained approach is used in animation, it is not suitable for biomechanics. In my view, the constrained approach with minimal markers is a step backwards to the original gait marker sets, which also sought to minimize the total number of markers due to the limitations of 3D systems at the time. These were unreliable and very poor at describing non-sagittal joint rotations in gait, especially at the knee; the reasons for this are for another discussion.

            Originally posted by bogert View Post
            There are now several papers about inverse dynamics of impact (some are cited earlier in this thread) and these show that force and motion data must generally be filtered with the same filter, when doing inverse dynamic analysis. If motion data is filtered at 15 Hz (otherwise accelerations would be too noisy), this removes the inertial forces above 15 Hz. This means that all other forces above 15 Hz (including force plate data) must also be removed, to avoid artifacts. If you don't filter the raw data at all, but filter the final result at 15 Hz, as Allen suggests, this is practically equivalent and you would get almost the same result.

            My 1996 simulation study (http://isbweb.org/data/invdyn/csb96.pdf) showed that for some variables (e.g. ankle reaction force) it is better to filter force data at a higher frequency (56 Hz) than motion data, which reflects the fact that the inertial forces are much smaller in the foot...
            A quick read of this latter study leads me to believe the methods and conclusions are flawed.
            This study filtered the positional data with a low-pass filter that was far too low for the frequencies of movement during the impact phase. This will introduce distortions in the (x,y) position data to fit the inappropriately small maximum frequency allowed by the filter, thus creating erroneous position data near the point of impact (possibly with retardation, overshoot, and low-frequency oscillations introduced?). To this, good force data were applied to produce, as expected, poor joint moment data. An attempt is then made through optimization to distort the force data through more inappropriate filtering (also with introduced distortions and reduction of impact peaks) to recreate the desired joint moment data. The result is that neither the position, force, nor moment data presented post-filtering are correct. This is like trying to get two wrongs to make a right. The addition of 0.5 mm random noise to the position data was irrelevant to the study; as the study stated at the outset, with no filtering of the position and force data, the expected joint moment data were obtained. I feel the desire was to filter out the added noise, but the effect of the filtering was something entirely different. Instead of demonstrating why you should filter the position and force data at similar low frequencies, the study showed why you should not when high-frequency impacts are involved.

            In an ideal simple cyclic movement with no noise or impact forces, the frequency content of the acceleration data will be twice that of the position or joint angle data (I said four times in my original reply). The point being that if you filter the sampled accelerometer or force plate data at a frequency based on the position data, you risk filtering out the acceleration or force that you are interested in. If there are impacts, then filtering either position or force at such low frequencies (15 Hz) is not appropriate, and you may be better off collecting both 3D marker and force plate data at 100 Hz and ignoring the impact phase altogether.

            Allan Carman



            • #7
              Re: Cutoff frequency for doing inverse dynamic

              This is a very good discussion to have.

              To start with Allan's last point: differentiation does not alter frequency. A simple example: x = A sin(ωt). Acceleration: a = -ω² A sin(ωt). It is still a sine wave with the same frequency, just inverted and with a new amplitude (and new units).

              There is indeed a philosophical divide between 6-DOF and global optimization. I used to be firmly in the 6-DOF camp. Back in 1996, I developed a global optimization solver for Motion Analysis Corp., which is now known as the Calcium Solver. This was motivated by the needs of the animation industry, because they wanted joints to not come apart (not even slightly). When Motion Analysis asked me if this could be useful in biomechanics, I said no way: I want to know all 6 DOF in a joint, and then decide which ones to look at.

              But as I got more into movement simulation and tracking of human motion data with simulations, it became important to generate the motion data (joint angles) with the same model that was used for forward dynamics. In forward dynamics, you need joints, 6-DOF does not work. So for those applications I started using global optimization (also known as least-squares inverse kinematics). In Opensim this is now the standard pipeline. I still say that 6-DOF must be used for orthopedic applications. But even there, only if the actual joint motion (e.g. anterior drawer in the knee) is larger than the measurement error. This is the case with bone markers, maybe only just so for skin markers. For muscle function and motor control questions, I definitely favor the global optimization method. This also makes sure that the mechanical energetics is correct. If you do 6-DOF analysis, and only report joint power associated with rotations, and add them up, you may not be able to show conservation of energy. Also it is nice that all DOF can be controlled by muscles.

              On the filtering, as I said, I am warming up to the idea of not filtering the input, but just do the whole processing and then determine what (if anything) is needed to clean up the output. Marker data is not as noisy anymore these days, because of high-resolution cameras. I have done inverse dynamic analysis of arm movements with no filtering at all and the joint moments looked fine. I do have some concern that the kinematic analysis is a nonlinear process, so it alters the frequency content. Not because of differentiation but because rigid body kinematics involves nonlinear sin/cos functions. White noise in marker data can appear with a different spectrum in joint angles or joint moments. But that may not be a problem in practice.

              It is great that it is possible to track 82 markers, because that gives options and good quality. It allows a good 6-DOF analysis, but a protocol with many markers also helps global optimization. If you track 82 markers with a 33-DOF full body model (typical for Opensim gait models), you are estimating 33 variables from 246 measurements (XYZ of 82 markers). That is a very good ratio and the noise reduction would be quite significant. It is extremely important to look at the residuals of the least-squares analysis. Anything over 3 mm RMS indicates that your model has trouble following the markers. Opensim's model scaling tool is not great; I tend to do this my own way, and residuals are generally below 3 mm for a lower extremity analysis.

              Here is perhaps an idea to determine whether 6-DOF is needed. Do the kinematic analysis with 6-DOF and 33 DOF. The 6-DOF is guaranteed to have lower residuals, but does that mean the 33-DOF model is worse? This is a common problem in model fitting, and there are some tools to determine when additional model complexity is justified by the improved fit. One of those is the Bayesian Information Criterion (BIC, https://en.wikipedia.org/wiki/Bayesi...tion_criterion).

              BIC = n*log(MSE) + k*log(n), where MSE is the mean-squared error, n is the number of measurements, k is the number of parameters being estimated from the data, and log is natural logarithm.

              Units of measurement in the MSE will change BIC by a constant, so we should not look at the BIC itself, only at differences in BIC between two models.

              Let's say we have a 6-DOF model with 14 body segments (14*6 = 84 parameters) which gives 2 mm RMS fit error. And we have a 33-DOF model (with 33 parameters) which gives 3 mm RMS fit error. Both use the same data from 82 markers (246 measurements).

              BIC for the 6-DOF model (MSE = 4 mm^2) is 246*log(4) + 84*log(246) = 803.4763

              BIC for the global optimization model (MSE = 9 mm^2) is 246*log(9) + 33*log(246) = 722.1932

              According to the criteria stated on the Wikipedia page, this would be "very strong" justification for using the global optimization model, and the 6-DOF model is considered to be overfitting.

              Consider the scenario where the 6-DOF model has a RMS fit error of only 1 mm. BIC would now be 246*log(1) + 84*log(246) = 462.4478

              Now the 6-DOF model has "very strong" statistical justification.
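
              The numbers above are easy to reproduce; a quick sketch of the BIC comparison, using the formula defined earlier in this post:

```python
import math

def bic(mse, n, k):
    # BIC = n*log(MSE) + k*log(n), natural log, as defined above.
    return n * math.log(mse) + k * math.log(n)

n = 246                       # 82 markers x 3 coordinates
bic_6dof = bic(4, n, 84)      # 6-DOF model, 2 mm RMS (MSE = 4 mm^2)  -> 803.4763
bic_go = bic(9, n, 33)        # 33-DOF model, 3 mm RMS (MSE = 9 mm^2) -> 722.1932
bic_6dof_1mm = bic(1, n, 84)  # 6-DOF model, 1 mm RMS (MSE = 1 mm^2)  -> 462.4478
```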

              Those who use global optimization usually do not have enough markers to do the 6-DOF analysis. With your 82 markers you can actually do it both ways, determine the residuals, and compare the BIC.

              The BIC may only be valid for uncorrelated errors etc., which we don't have (skin motion artifact) but maybe it's a good start to bring some statistical rigor into this debate.

              Ton



              • #8
                Re: Cutoff frequency for doing inverse dynamic

                IK/global optimization is attractive to me because I conceptually like the idea of not assuming the noise/error in the data exists in a certain place in the frequency domain. IK removes that assumption but then has other assumptions (e.g. the need to define joint models as Allan mentioned), but those could also be data-driven, at least in some cases (e.g. functional joints). Which method is best I think will always depend on the question and on the data available.

                The recent movement towards smoothing GRF at low cutoffs for inverse dynamics modeling raises an important question (I think it's important, anyway) of what if my GRF parameters are also outcome variables? I've greatly distorted them (assuming the raw GRF measurements are reasonably accurate and have real high-frequency content) and if I instead analyze the raw GRF or GRF smoothed at a higher cutoff, then they are not the GRF that produced the joint moments on my model. Or more generally, even if the GRF are not outcome variables, the GRF I'm using to calculate joint moments do not resemble the GRF I measured.

                I like the idea of doing modeling with the raw data then filtering the output.

                Ross
                Last edited by Ross Miller; February 6, 2016, 11:27 AM.



                • #9
                  Re: Cutoff frequency for doing inverse dynamic

                  Ross, you probably did not mean to say that IK removes the need for filtering. It reduces the need for filtering (just like 6-DOF models do, but more so). Whether or not filtering is still needed depends on how much noise there was on the marker data in the first place. If filtering is still needed, it can be done with a higher cutoff frequency because the noise has already been reduced.

                  The need for filtering is truly removed when you track motion data with a musculoskeletal dynamics optimal control model, but that's a different topic. We're strictly talking about kinematic models here.

                  If GRF variables are needed as outcome variables, I would not filter them.

                  Ton



                  • #10
                    Re: Cutoff frequency for doing inverse dynamic

                    Originally posted by bogert View Post
                    Ross, you probably did not mean to say that IK removes the need for filtering. It reduces the need for filtering (just like 6-DOF models do, but more so).
                    With IK could you run it on raw marker data, then filter the angles? That was what I was trying to say.

                    Or do IK on the raw marker data then average the angles over trials (as long as you don't care about data from the individual trials).

                    I guess you could do all this with 6-DoF as well.

                    Ross
                    Last edited by Ross Miller; February 6, 2016, 04:33 PM.



                    • #11
                      Re: Cutoff frequency for doing inverse dynamic

                      OK, thanks for the clarification!



                      • #12
                        Re: Cutoff frequency for doing inverse dynamic

                        Dear Allan, Ton, Ross,

                        this has been a really good learning point for me. Are there any other papers out there (I only know of one, "On the filtering of intersegmental loads during running") that filtered outputs (kinematics, moments, power, GRF, etc.) rather than filtering raw inputs?

                        It does get tricky/confusing (maybe for a reader or a reviewer) when a paper touches on multiple outcome variables (e.g. kinematics, GRF, power), and each output clearly needs to be driven by a specific filtering routine. It would definitely be easier to filter outputs; hence, it would be useful to have a few precedents for such a use.

                        Kind regards,
                        Bernard



                        • #13
                          Re: Cutoff frequency for doing inverse dynamic

                          Bernard,

                          That is a good paper to cite. I don't know of any other studies that did it this way. Maybe Allan knows.

                          In the old days, filtering was sometimes done at the end, though not because of a better understanding than we have now; mainly because the moments looked ugly until they were filtered. A 1990 paper by Paul DeVita on running did spline smoothing on the film data to get good derivatives, and no smoothing on the ground reaction forces. Joint moments were then calculated. I suspect there were impact artifacts that did not look trustworthy, so the moments were then filtered with a 10 Hz low-pass Butterworth filter. In the end, the result was probably pretty good, even though the methods were not elegant. I suspect that the entire spline smoothing process was unnecessary (for the joint moment results, at least). In those days, all steps of the processing were visible, so it must have been scary to skip the initial smoothing: without a filter at the very beginning, there was no way to judge the derivatives, because they would be mostly noise and the signal invisible. But these are only intermediate results; the final filter at the end would have revealed the signal again.

                          Ton



                          • #14
                            Re: Cutoff frequency for doing inverse dynamic

                            I think a critical question is: what is the purpose of your inverse dynamics analysis? There is no agreed-upon processing technique for handling the data: smooth before, smooth after, or don't smooth at all. And at what cutoff? The differences in handling the data obviously underlie the differences in presented/published discrete values and time-series curves of joint torques and powers.

                            Dr. David Winter, an early proponent of using joint energetics and kinetics, proposed that it was the shape of the curves that was important, as a reflection of the NMS control. He argued against analyzing discrete values (minima, maxima) taken from a curve, using an analogy to EKG interpretation. Comparing changes in the times of zero crossings, integrals of the curves, and relationships between the hip, knee, and ankle was how Dr. Winter used the curves, starting back in the early 1970s.

                            Even today, there are very few papers validating calculated moments and powers to in vivo values. Much "validation" is based on an intra-ocular technique: those look good to me, and if they don't, I'll just smooth them again, somewhere in the processing, until I can interpret them. I'd argue that the answer to how to process the data is up to you. The important aspect is that you use the same processing for each trial of each condition for each subject, then compare the curves. Don't expect your results to match others who analyze their data differently. Just my two cents' worth.
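
                            Winter-style curve measures (zero-crossing times, integrals) are straightforward to compute; here is a small sketch on a hypothetical moment curve (the curve itself is invented for illustration):

```python
import numpy as np

fs = 120
t = np.arange(0, 1, 1 / fs)
# Hypothetical biphasic joint moment curve (Nm) over a 1 s trial.
moment = 80 * np.sin(2 * np.pi * 1.5 * t) * np.exp(-2 * t)

# Times where the curve changes sign between adjacent samples.
cross_idx = np.where(np.diff(np.sign(moment)) != 0)[0]
zero_crossings = t[cross_idx]

# Angular impulse (Nm*s): trapezoidal integral of the moment over the trial.
impulse = np.sum((moment[1:] + moment[:-1]) / 2) / fs
```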



                            • #15
                              Re: Cutoff frequency for doing inverse dynamic

                              Originally posted by smccaw97 View Post
                              Even today, there are very few papers validating calculated moments and powers to in vivo values.
                              I think this is a really important point.

                              Personally I’ve been slow to get on board the “everything should be filtered at the same cutoff” train because I’ve not seen a convincing theoretical argument for why it necessarily produces more accurate joint moments as a general rule, and I don’t like the idea of inputting GRF to my model that are greatly distorted compared to the GRF I measured.

                              What I think the studies to date on this topic suggest is that we need to work on improving the realisticness (?) of the models we use for inverse dynamics and the accuracy of the kinematic data we input to these calculations, assuming the GRF are reasonably accurate. Bisseling and Hof (2006) tried this, with the segment accelerations informed by accelerometer measurements, but it didn’t work well if I remember right.

                              We also need more studies like (http://isbweb.org/data/invdyn/csb96.pdf) comparing the accuracy of these different filtering/modeling/processing approaches when the “real” joint moments are known.

                              Ross

