Re: Cutoff frequency for doing inverse dynamic
This may be getting more off topic than the original post, but here is a little more than my two cents worth on methods and filtering :-)
“Don't expect your results to match others who analyze their data differently”
We should expect our results to resemble those of others who are studying a similar population and activity, especially joint angle data in normal gait. It is not good enough to accept that inter-session and inter-examiner results will be unrelated, even on the same participant, for something as reproducible as gait. The 3DMA literature does show little consistency in what has been presented as normal non-sagittal knee joint rotations during gait, and this includes the poor results of 3D gait reliability studies. However, this reflects poor methods and understanding of 3DMA, rather than the variability of gait or what we should be expecting from 3DMA. If the quality of data (filtering and monitoring RMS errors in least squares are part of this) and axis misalignment were addressed in a more thorough and informed manner, you would find that in gait the non-sagittal rotations are more reliable than sagittal plane rotations, and that knee abd/add is one of the most reproducible lower limb joint rotations across individuals. So much so that knee abd/add can be used as a guide to the alignment of lower limb axes, making the methods largely independent of marker placement and eliminating time-consuming and ineffective practices such as knee alignment devices, wands and defining functional axes through a series of hip-leg waving movements.
“The difference in handling the data obviously underlies the differences in presented/published discrete values and time-data curves of joint torques and powers.”
“I think the studies to date on this topic suggest is that we need to work on improving the realisticness (?) of the models we use for inverse dynamics and the accuracy of the kinematic data”
It is more than just differences in handling of data; as mentioned, it is the reliability of the underlying methods (models) we use and the ability to get repeatable results. No matter what your outcome measures, you need valid and reliable data describing the position and orientation of segment axes, from which joint rotation curves and discrete variables are derived and relative to which moments and powers are expressed. Even if the activity does not involve gait, or even joint kinematics, I would suggest you should collect normal gait as part of the model validation procedure. In gait, knee abd/add range of motion is small, patterns are highly reproducible, and both are very sensitive to axis misalignment. This can be used to verify axis alignment prior to analysis of other activities (jumps, side-steps etc.) or calculation of joint moments and powers. If your derived joint angle data is erroneous, reflecting large random errors in axis alignment and significant cross-talk, then it is not OK to carry on and expect that your joint moment data will be fine.
“I'd argue that the answer to how to process the data is up to you…”
Yes and no. Sound procedures need to be followed; filtering is only part of it. There are some key steps that I feel need to be done regardless of how you filter the data. So I would say: understand it, design it, test it, and when you are happy with the reliability, stick to it. It is worth noting that the method and the implementation (how the method is interpreted and applied by the examiner) are two different aspects of reliability. No current method appearing in the 3DMA literature is reliable; some implementations do show reliability for some joint angles in gait, but this is the exception in published reliability studies.
Bernard asked if there is a reference or precedent for filtering outcome measures post analysis. As you may have guessed, I don't follow the norm or the expected, and I don't know of any references I could add to support not filtering raw coordinates prior to the least squares reconstruction of axes. However, I did present the rationale, methods and results of a reliability study of my approach to 3DMA in a seminar (see YouTube, search under SPRINZ seminars). As mentioned by several of the contributors to the discussion so far, a far more critical look is needed into 3DMA methods and the validity and reliability of outcome measures than has been done in the past.
Re: Cutoff frequency for doing inverse dynamic
This paper might be interesting:
Schreven S, Beek PJ, Smeets JBJ (2015) Optimising filtering parameters for a 3D motion analysis system. Journal of Electromyography and Kinesiology 25: 808-814
Re: Cutoff frequency for doing inverse dynamic
Originally posted by smccaw97: Even today, there are very few papers validating calculated moments and powers to in vivo values.
Personally I’ve been slow to get on board the “everything should be filtered at the same cutoff” train because I’ve not seen a convincing theoretical argument for why it necessarily produces more accurate joint moments as a general rule, and I don’t like the idea of inputting GRF to my model that are greatly distorted compared to the GRF I measured.
What I think the studies to date on this topic suggest is that we need to work on improving the realisticness (?) of the models we use for inverse dynamics and the accuracy of the kinematic data we input to these calculations, assuming the GRF are reasonably accurate. Bisseling and Hof (2006) tried this, with the segment accelerations informed by accelerometer measurements, but it didn’t work well if I remember right.
We also need more studies like (http://isbweb.org/data/invdyn/csb96.pdf) comparing the accuracy of these different filtering/modeling/processing approaches when the “real” joint moments are known.
Ross
Re: Cutoff frequency for doing inverse dynamic
I think a critical question is what is the purpose of your inverse dynamics analysis? There is no agreed upon processing technique for handling the data: Smooth before, smooth after, don't smooth at all. What Hz? The difference in handling the data obviously underlies the differences in presented/published discrete values and time-data curves of joint torques and powers. Dr. David Winter, an early proponent of using joint energetics and kinetics proposed that it was the shape of the curves that were important as a reflection of the NMS control. He argued against analyzing discrete values (mins, maxs) taken from a curve, using an analogy to EKG interpretation. Comparing changes in times of zero crossings, integrals of the curves, and relationships between the hip, knee and ankle was how Dr. Winter used the curves starting back in the early 1970s. Even today, there are very few papers validating calculated moments and powers to in vivo values. Much "validation" is based on an intra-ocular technique: Those look good to me, and if they don't I'll just smooth them again, somewhere in the steps of processing, until I can interpret them. I'd argue that the answer to how to process the data is up to you. The important aspect is that you use the same processing for each trial of each condition for each subject--then compare the different curves. Don't expect your results to match others who analyze their data differently. Just my two cents worth.
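The Winter-style curve comparisons described above (zero-crossing times and integrals rather than discrete mins/maxs) can be sketched in a few lines. This is a hypothetical Python/NumPy illustration; the moment curve here is entirely synthetic, not real gait data:

```python
import numpy as np

# Synthetic biphasic "joint moment" curve over normalized stance (0-100%)
t = np.linspace(0, 1, 101)
moment = np.sin(2 * np.pi * t) * np.exp(-t)   # made-up curve, N*m

# Times of zero crossings: a sign change between adjacent samples
s = np.sign(moment)
crossings = t[1:][s[1:] * s[:-1] < 0]

# Integral of the curve (angular impulse), trapezoidal rule
impulse = np.sum((moment[1:] + moment[:-1]) / 2 * np.diff(t))

print("zero crossings at:", crossings)
print("integral of curve:", impulse)
```

Comparing these curve-level quantities across conditions, rather than isolated peak values, is the kind of analysis the post attributes to Winter.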
Re: Cutoff frequency for doing inverse dynamic
Bernard,
That is a good paper to cite. I don't know of any other studies that did it this way. Maybe Allan knows.
In the old days, filtering was sometimes done at the end, though not because of a better understanding than we have now, but mainly because the moments looked ugly until they were filtered. This 1990 paper by Paul Devita on running did spline smoothing on the film data to get good derivatives, and no smoothing on ground reaction forces. Joint moments were then calculated. I suspect there were impact artifacts that did not look trustworthy, so the moments were then filtered with a 10 Hz low-pass Butterworth filter. In the end, the result was probably pretty good, even though the methods were not elegant. I suspect that the entire spline smoothing process was unnecessary (for the joint moment results, at least). In those days, all steps of the processing were visible, so it must have been scary to skip the initial smoothing. Without a filter at the very beginning, there was no way to judge the derivatives because they would be mostly noise and the signal was invisible. But these are only intermediate results; the final filter at the end would have revealed the signal again.
Ton
Re: Cutoff frequency for doing inverse dynamic
Dear Allan, Ton, Ross,
This has been a really good learning point for me. Are there any other papers out there (I only know of one, "On the filtering of intersegmental loads during running") that filtered outputs (kinematics, moments, power, GRF etc.) rather than filtering raw inputs?
It does get tricky/confusing (maybe for a reader or a reviewer) when a paper is touching on multiple outcome variables (e.g. kinematics, GRF, power), and each output clearly needs to be driven by a specific filtering routine. It would definitely be easier to filter outputs; hence, it would be useful to have a few precedents for such a use.
Kind regards,
Bernard
Re: Cutoff frequency for doing inverse dynamic
OK, thanks for the clarification!
Re: Cutoff frequency for doing inverse dynamic
Originally posted by bogert: Ross, you probably did not mean to say that IK removes the need for filtering. It reduces the need for filtering (just like 6-DOF models do, but more so).
Or do IK on the raw marker data then average the angles over trials (as long as you don't care about data from the individual trials).
I guess you could do all this with 6-DoF as well.
Ross
Re: Cutoff frequency for doing inverse dynamic
Ross, you probably did not mean to say that IK removes the need for filtering. It reduces the need for filtering (just like 6-DOF models do, but more so). Whether or not filtering is still needed depends on how much noise there was on the marker data in the first place. If filtering is still needed, it can be done with a higher cutoff frequency because the noise has already been reduced.
The need for filtering is truly removed when you track motion data with a musculoskeletal dynamics optimal control model, but that's a different topic. We're strictly talking about kinematic models here.
If GRF variables are needed as outcome variables, I would not filter them.
Ton
Re: Cutoff frequency for doing inverse dynamic
IK/global optimization is attractive to me because I conceptually like the idea of not assuming the noise/error in the data exists in a certain place in the frequency domain. IK removes that assumption but then has other assumptions (e.g. the need to define joint models as Allan mentioned), but those could also be data-driven, at least in some cases (e.g. functional joints). Which method is best I think will always depend on the question and on the data available.
The recent movement towards smoothing GRF at low cutoffs for inverse dynamics modeling raises an important question (I think it's important, anyway) of what if my GRF parameters are also outcome variables? I've greatly distorted them (assuming the raw GRF measurements are reasonably accurate and have real high-frequency content) and if I instead analyze the raw GRF or GRF smoothed at a higher cutoff, then they are not the GRF that produced the joint moments on my model. Or more generally, even if the GRF are not outcome variables, the GRF I'm using to calculate joint moments do not resemble the GRF I measured.
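To make the distortion concern concrete, here is a toy Python/SciPy sketch. All signal shapes and numbers are invented for illustration: a synthetic GRF with a slow "active" peak plus a sharp impact transient is passed through a 15 Hz low-pass filter of the kind chosen for motion data, and the impact peak is largely removed:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                                        # force plate sample rate, Hz
t = np.arange(0, 0.5, 1 / fs)                      # one stance phase, 0.5 s
active = 800 * np.sin(np.pi * t / 0.5)             # slow "active" peak (N)
impact = 2000 * np.exp(-((t - 0.05) / 0.005)**2)   # sharp impact transient (N)
grf = active + impact

# 4th-order zero-phase Butterworth low-pass at 15 Hz
b, a = butter(4, 15 / (fs / 2))
grf_filt = filtfilt(b, a, grf)

print(f"raw peak:      {grf.max():.0f} N")
print(f"filtered peak: {grf_filt.max():.0f} N")    # impact peak largely removed
```

If peak GRF is itself an outcome variable, the raw and filtered versions describe quite different signals, which is exactly the tension described above.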
I like the idea of doing modeling with the raw data then filtering the output.
Ross
Re: Cutoff frequency for doing inverse dynamic
This is a very good discussion to have.
To start with Allan's last point: differentiation does not alter frequency. A simple example: x = A*sin(w*t). Acceleration: a = -w^2*A*sin(w*t). It's still a sine wave with the same frequency, just inverted and with a new amplitude (and new units).
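This point is easy to verify numerically. A hypothetical Python/NumPy sketch (sample rate, amplitude and signal frequency are arbitrary choices): differentiate a 5 Hz sine twice and confirm the spectral peak stays at 5 Hz:

```python
import numpy as np

fs = 1000.0                    # sample rate, Hz (arbitrary)
t = np.arange(0, 2, 1 / fs)    # 2 s of data
f0 = 5.0                       # signal frequency, Hz
x = 0.3 * np.sin(2 * np.pi * f0 * t)      # "position"
acc = np.gradient(np.gradient(x, t), t)   # numerical second derivative

def peak_freq(sig):
    """Frequency (Hz) of the largest spectral peak."""
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    return freqs[np.argmax(np.abs(np.fft.rfft(sig)))]

print(peak_freq(x), peak_freq(acc))   # both peaks sit at ~5 Hz
```

The acceleration spectrum peaks at the same frequency as the position; only the amplitude (scaled by w^2) and sign change.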
There is indeed a philosophical divide between 6-DOF and global optimization. I used to be firmly in the 6-DOF camp. Back in 1996, I developed a global optimization solver for Motion Analysis Corp., which is now known as the Calcium Solver. This was motivated by the needs of the animation industry, because they wanted joints to not come apart (not even slightly). When Motion Analysis asked me if this could be useful in biomechanics, I said no way, I want to know all 6 DOF in a joint, and then decide which ones to look at.
But as I got more into movement simulation and tracking of human motion data with simulations, it became important to generate the motion data (joint angles) with the same model that was used for forward dynamics. In forward dynamics, you need joints, 6-DOF does not work. So for those applications I started using global optimization (also known as least-squares inverse kinematics). In Opensim this is now the standard pipeline. I still say that 6-DOF must be used for orthopedic applications. But even there, only if the actual joint motion (e.g. anterior drawer in the knee) is larger than the measurement error. This is the case with bone markers, maybe only just so for skin markers. For muscle function and motor control questions, I definitely favor the global optimization method. This also makes sure that the mechanical energetics is correct. If you do 6-DOF analysis, and only report joint power associated with rotations, and add them up, you may not be able to show conservation of energy. Also it is nice that all DOF can be controlled by muscles.
On the filtering, as I said, I am warming up to the idea of not filtering the input, but just do the whole processing and then determine what (if anything) is needed to clean up the output. Marker data is not as noisy anymore these days, because of high-resolution cameras. I have done inverse dynamic analysis of arm movements with no filtering at all and the joint moments looked fine. I do have some concern that the kinematic analysis is a nonlinear process, so it alters the frequency content. Not because of differentiation but because rigid body kinematics involves nonlinear sin/cos functions. White noise in marker data can appear with a different spectrum in joint angles or joint moments. But that may not be a problem in practice.
It is great that it is possible to track 82 markers because that gives options and good quality. It allows a good 6-DOF analysis, but a protocol with many markers also helps global optimization. If you track 82 markers with a 33-DOF full body model (typical for Opensim gait models), you are estimating 33 variables from 246 measurements (XYZ of 82 markers). That is a very good ratio and the noise reduction would be quite significant. It is extremely important to look at the residuals of the least-squares analysis. Anything over 3 mm RMS indicates that your model has trouble following the markers. Opensim's model scaling tool is not great; I tend to do this my own way, and residuals are generally below 3 mm for a lower extremity analysis.
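As an illustration of this residual check, the least-squares pose fit for a single marker cluster can be sketched as follows (Python/NumPy; the marker layout, true pose and 2 mm noise level are invented, and a full-body IK solver would fit all segments and joint constraints simultaneously rather than one cluster):

```python
import numpy as np

rng = np.random.default_rng(0)
local = rng.uniform(-0.1, 0.1, (8, 3))   # 8 markers on one segment, metres

# True pose: rotation R about z and translation p
ang = 0.4
R = np.array([[np.cos(ang), -np.sin(ang), 0],
              [np.sin(ang),  np.cos(ang), 0],
              [0,            0,           1]])
p = np.array([0.5, 0.2, 1.0])
measured = local @ R.T + p + rng.normal(0, 0.002, (8, 3))  # 2 mm marker noise

# Kabsch/SVD: best-fit rotation and translation in the least-squares sense
A = local - local.mean(axis=0)
B = measured - measured.mean(axis=0)
U, s, Vt = np.linalg.svd(A.T @ B)
D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
R_fit = Vt.T @ D @ U.T
p_fit = measured.mean(axis=0) - R_fit @ local.mean(axis=0)

# RMS residual between model-predicted and measured marker coordinates
resid = measured - (local @ R_fit.T + p_fit)
rms = np.sqrt(np.mean(resid**2))
print(f"RMS residual: {rms * 1000:.2f} mm")   # sits near the injected noise level
```

The residual is the quantity to monitor: if it climbs well above the expected marker noise (the 3 mm rule of thumb above), the model is having trouble following the markers.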
Here is perhaps an idea to determine whether 6-DOF is needed. Do the kinematic analysis with 6-DOF and 33 DOF. The 6-DOF is guaranteed to have lower residuals, but does that mean the 33-DOF model is worse? This is a common problem in model fitting, and there are some tools to determine when additional model complexity is justified by the improved fit. One of those is the Bayesian Information Criterion (BIC, https://en.wikipedia.org/wiki/Bayesi...tion_criterion).
BIC = n*log(MSE) + k*log(n), where MSE is the mean-squared error, n is the number of measurements, k is the number of parameters being estimated from the data, and log is natural logarithm.
Units of measurement in the MSE will change BIC by a constant, so we should not look at the BIC itself, only at differences in BIC between two models.
Let's say we have a 6-DOF model with 14 body segments (14*6 = 84 parameters) which gives 2 mm RMS fit error. And we have a 33-DOF model (with 33 parameters) which gives 3 mm RMS fit error. Both use the same data from 82 markers (246 measurements).
BIC for the 6-DOF model (MSE = 4 mm^2) is 246*log(4) + 84*log(246) = 803.4763
BIC for the global optimization model (MSE = 9 mm^2) is 246*log(9) + 33*log(246) = 722.1932
According to the criteria stated on the Wikipedia page, this would be "very strong" justification for using the global optimization model, and the 6-DOF model is considered to be overfitting.
Consider the scenario where the 6-DOF model has a RMS fit error of only 1 mm. BIC would now be 246*log(1) + 84*log(246) = 462.4478
Now the 6-DOF model has "very strong" statistical justification.
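The BIC comparisons above can be reproduced directly. A small Python sketch of exactly the numbers already given in the text (n = 246 marker coordinates, MSE in mm^2, k = number of estimated pose parameters):

```python
import math

def bic(mse_mm2, k, n=246):
    """BIC = n*log(MSE) + k*log(n), natural logarithm."""
    return n * math.log(mse_mm2) + k * math.log(n)

bic_6dof = bic(4, 84)       # 14 segments x 6 DOF, 2 mm RMS -> MSE = 4 mm^2
bic_glob = bic(9, 33)       # 33-DOF full-body model, 3 mm RMS -> MSE = 9 mm^2
bic_6dof_1mm = bic(1, 84)   # 6-DOF scenario with only 1 mm RMS

print(round(bic_6dof, 4))      # 803.4763
print(round(bic_glob, 4))      # 722.1932
print(round(bic_6dof_1mm, 4))  # 462.4478
```

The lower BIC wins, so the 33-DOF model is preferred in the first scenario and the 6-DOF model in the second, as stated above.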
Those who use global optimization usually do not have enough markers to do the 6-DOF analysis. With your 82 markers you can actually do it both ways, determine the residuals, and compare the BIC.
The BIC may only be valid for uncorrelated errors etc., which we don't have (skin motion artifact) but maybe it's a good start to bring some statistical rigor into this debate.
Ton
Re: Cutoff frequency for doing inverse dynamic
Ton's reply highlights some fundamental differences that exist in approaches and opinions on 3D motion analysis methods; as you read on you will see just how different they can be.
Originally posted by bogert: This is good advice, but I have a few comments. With 6-DOF joints (i.e. no joints in the model), 3 markers per segment (not 6) is sufficient. With fewer degrees of freedom, you can get away with even fewer markers by doing least-squares analysis on a full body model (as in Opensim's inverse kinematics). More markers and fewer degrees of freedom will produce more noise reduction via least squares. 6 markers may be appropriate for certain applications, but this would be hard to do for a full body motion capture. On the other end of the spectrum, if we only need to track a single segment, we might be able to use considerably more than 6 markers.
Originally posted by bogert: There are now several papers about inverse dynamics of impact (some are cited earlier in this thread) and these show that force and motion data must generally be filtered with the same filter, when doing inverse dynamic analysis. If motion data is filtered at 15 Hz (otherwise accelerations would be too noisy), this removes the inertial forces above 15 Hz. This means that all other forces above 15 Hz (including force plate data) must also be removed, to avoid artifacts. If you don't filter the raw data at all, but filter the final result at 15 Hz, as Allan suggests, this is practically equivalent and you would get almost the same result.
My 1996 simulation study (http://isbweb.org/data/invdyn/csb96.pdf) showed that for some variables (e.g. ankle reaction force) it is better to filter force data at a higher frequency (56 Hz) than motion data, which reflects the fact that the inertial forces are much smaller in the foot...
This study filtered the positional data with a low-pass filter that was far too low for the frequencies of movement during the impact phase. This introduces distortions in the (x,y) position data to fit an inappropriately small maximum frequency allowed by the filter, creating erroneous position data near the point of impact (possibly with retardation, overshoot and low-frequency oscillations introduced?). To this, good force data was applied to produce, as expected, poor joint moment data. An attempt is then made through optimization to distort the force data through more inappropriate filtering (also with introduced distortions and reduction of impact peaks) to recreate the desired joint moment data. The result is that neither the position, force nor moment data presented post filtering are correct. This is like trying to get two wrongs to make a right. The addition of the 0.5 mm random noise to the position data was irrelevant to the study. As the study stated at the outset, with no filtering of the position and force data the expected joint moment data were obtained. I feel the desire was to filter the added noise, but the effect of the filtering was something entirely different. Instead of demonstrating why you should filter the position data and force data at similar low frequencies, the study showed why you should not when high-frequency impacts are involved.
In an ideal simple cyclic movement with no noise or impact forces, the frequency of the acceleration data will be twice that of the position or joint angle data (I said four times in my original reply). The point being that if you filter the sampled accelerometer or force plate data at a frequency based on the position data, you risk filtering out the acceleration or force that you are interested in. If there are impacts, then filtering either position or force at such low frequencies (15 Hz) is not appropriate, and you may be better off collecting both 3D marker and force plate data at 100 Hz and ignoring the impact phase altogether.
Allan Carman
Re: Cutoff frequency for doing inverse dynamic
Thank you, all of you, for the very helpful suggestions.
Yumeng
Re: Cutoff frequency for doing inverse dynamic
Originally posted by Allan Carman: I would recommend not filtering 3D coordinate data at all and only filtering force plate data to remove high-frequency noise. Instead use a least squares cluster design (6 or more markers per segment) and filter the calculated joint angle data, resultant joint forces and moments. The least squares approach does act as a low pass filter for high frequency noise, but more importantly the RMS error calculated between local and global coordinates of segment markers can be used to screen, identify and remove or correct errors in marker coordinates, including marker reconstruction, tracking and identification errors. These errors cannot be addressed if the raw 3D coordinate data has been smoothed.
The least-squares approach reduces error, but it is not correct to say that it acts as a low-pass filter. Marker coordinate error is reduced equally for high and low frequencies. That's actually better than a low-pass filter, which does not remove low-frequency error.
Screening for tracking errors, by looking for outliers in the model residuals, is indeed an important capability that comes from using least squares, and as Allan said, it must be done before filtering.
I am starting to warm up to the idea of not filtering raw data but, as Allan suggests, doing the entire kinematic/dynamic analysis with noisy data, and then filtering the final results. In the old days, we would look at intermediate results (e.g. accelerations) and if they looked terribly noisy, we would not like it. But noisy acceleration is not a problem if, in the equations, it gets multiplied by a small mass and/or the final force or moment result is still going to be filtered before it is used.
Originally posted by Allan Carman: If the 3D coordinate data were to be filtered by a low pass filter, reflecting the maximum frequency of interest, then it is not correct to filter the force data at the same frequency (as has been suggested, both at 15 Hz). For a simple periodic arm movement, the frequency of changes in velocity will be twice, and acceleration four times, the frequency of the position data. Therefore, if acceleration data were to be filtered, it should be filtered at a frequency at least four times that used for the position data. Similarly, force plate data should also be filtered at a frequency at least four times the filter used for the position data, if filtering were to be used. Foot impacts of running or jumping are at far higher frequencies, and the limitation is the maximum frequency of the 3D video system (200-250 Hz). Collection of force data is not usually an issue: it is routinely collected at 1,000 Hz regardless of the movement, and can be pushed higher for impacts, as you have done. For impacts, filtering either position or force data with a 15 Hz low pass is far too low and would not be appropriate.
There are now several papers about inverse dynamics of impact (some are cited earlier in this thread) and these show that force and motion data must generally be filtered with the same filter, when doing inverse dynamic analysis. If motion data is filtered at 15 Hz (otherwise accelerations would be too noisy), this removes the inertial forces above 15 Hz. This means that all other forces above 15 Hz (including force plate data) must also be removed, to avoid artifacts. If you don't filter the raw data at all, but filter the final result at 15 Hz, as Allan suggests, this is practically equivalent and you would get almost the same result.
My 1996 simulation study (http://isbweb.org/data/invdyn/csb96.pdf) showed that for some variables (e.g. ankle reaction force) it is better to filter force data at a higher frequency (56 Hz) than motion data, which reflects the fact that the inertial forces are much smaller in the foot. So there is some potential to do better than simply using one filter for everything, but this will require doing the filtering on raw data (not on the final result), and these various filter settings will be hard to optimize, unless you already know the correct forces and moments (as in my simulation study).
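The claimed near-equivalence of "filter all inputs" and "filter only the output" is exact when the inverse-dynamics step is linear, because a linear filter commutes with sums and scaling (real 3D kinematics is nonlinear, as noted elsewhere in this thread, so there it is only approximate). A toy 1-D Python/SciPy sketch with invented numbers:

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(1)
fs = 1000.0
t = np.arange(0, 1, 1 / fs)

# Noisy toy inputs: segment acceleration (m/s^2) and GRF (N)
acc = np.sin(2 * np.pi * 2 * t) + rng.normal(0, 0.5, t.size)
grf = 700 + 300 * np.sin(2 * np.pi * 2 * t) + rng.normal(0, 20, t.size)
m = 4.0   # segment mass, kg (made-up)

# Zero-phase 15 Hz low-pass
b, a = butter(2, 15 / (fs / 2))
filt = lambda x: filtfilt(b, a, x)

# Toy 1-D "inverse dynamics": net joint force = m*a - GRF
net_in = m * filt(acc) - filt(grf)    # filter the inputs first
net_out = filt(m * acc - grf)         # filter only the final result

print(np.max(np.abs(net_in - net_out)))   # essentially zero for a linear model
```

For this linear model the two pipelines agree to floating-point precision; differences in a real analysis come from the nonlinear rigid-body kinematics, not from the filtering itself.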
Ton van den Bogert
Re: Cutoff frequency for doing inverse dynamic
I would recommend not filtering 3D coordinate data at all and only filtering force plate data to remove high-frequency noise. Instead use a least squares cluster design (6 or more markers per segment) and filter the calculated joint angle data, resultant joint forces and moments. The least squares approach does act as a low pass filter for high frequency noise, but more importantly the RMS error calculated between local and global coordinates of segment markers can be used to screen, identify and remove or correct errors in marker coordinates, including marker reconstruction, tracking and identification errors. These errors cannot be addressed if the raw 3D coordinate data has been smoothed.
If the 3D coordinate data were to be filtered by a low pass filter, reflecting the maximum frequency of interest, then it is not correct to filter the force data at the same frequency (as has been suggested, both at 15 Hz). For a simple periodic arm movement, the frequency of changes in velocity will be twice, and acceleration four times, the frequency of the position data. Therefore, if acceleration data were to be filtered, it should be filtered at a frequency at least four times that used for the position data. Similarly, force plate data should also be filtered at a frequency at least four times the filter used for the position data, if filtering were to be used. Foot impacts of running or jumping are at far higher frequencies, and the limitation is the maximum frequency of the 3D video system (200-250 Hz). Collection of force data is not usually an issue: it is routinely collected at 1,000 Hz regardless of the movement, and can be pushed higher for impacts, as you have done. For impacts, filtering either position or force data with a 15 Hz low pass is far too low and would not be appropriate.