tpribanic57

10-10-2005, 11:47 PM

Dear All,

Very often one validates (compares) two different methods of some kind. In perhaps its simplest form, one would calculate, for each method, the differences between some known ground truth and the values given by the proposed method (model). Then, by calculating certain statistical characteristics of the data, such as the RMS error, mean error, etc., one would draw a conclusion about which method is more accurate. In addition to simply reporting mean values for each method, some people demand going at least one step further. For example, they argue that it is not sufficient to say only that one value (mean error) is smaller than the other; it is also necessary to run further statistical (probability) tests to determine whether the obtained difference is statistically significant. In other words, the stated hypotheses about the validation (comparison) of the two methods have to be tested.
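To make the setup concrete, here is a minimal sketch of the "simplest form" described above: per-trial errors against ground truth, summarized by mean error and RMS error for two methods. All numbers and names are illustrative, not from any real experiment.

```python
import math

# Hypothetical ground-truth values and the estimates produced by two
# methods (made-up numbers for illustration only).
ground_truth = [10.0, 12.5, 9.8, 11.2, 10.7]
method_a     = [10.1, 12.3, 9.9, 11.5, 10.6]
method_b     = [10.3, 12.9, 9.5, 11.6, 10.2]

def errors(estimates, truth):
    """Signed differences between estimates and ground truth."""
    return [e - t for e, t in zip(estimates, truth)]

def mean_error(errs):
    """Mean (signed) error: indicates bias."""
    return sum(errs) / len(errs)

def rms_error(errs):
    """Root-mean-square error: overall magnitude of the errors."""
    return math.sqrt(sum(e * e for e in errs) / len(errs))

err_a = errors(method_a, ground_truth)
err_b = errors(method_b, ground_truth)

print(f"Method A: mean = {mean_error(err_a):+.3f}, RMS = {rms_error(err_a):.3f}")
print(f"Method B: mean = {mean_error(err_b):+.3f}, RMS = {rms_error(err_b):.3f}")
```

Comparing only these two summary numbers is exactly the step that, as argued above, may not be enough on its own; a significance test on the per-trial differences would be the next step.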

What would be the correct strategy when one would like to compare several different camera calibration methods (and/or different 2D systems) in terms of 3D reconstruction accuracy? Usually, I have seen works where people simply find (mean) differences between reconstructed values and known ground-truth positions and lengths (static tests) and/or velocities and accelerations (dynamic tests). Do we also need to employ here, for instance, a t-test to determine whether the difference between two methods (systems) is statistically significant? If so, is it enough to perform only a single calibration for each system (method), or multiple calibrations, and how would one then combine the results of multiple calibrations (reconstructions)? Is it possible to argue that a single calibration is representative enough and to work solely with it? What would be the test for that?
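For reference, the paired t-test mentioned above, applied to matched per-trial errors of two methods measured against the same ground truth, can be sketched with only the standard library. The numbers are invented for illustration; in practice one would likely use `scipy.stats.ttest_rel`, which also reports the p-value.

```python
import math
import statistics

# Per-trial absolute reconstruction errors of two methods on the SAME
# trials and ground truth (illustrative numbers only).
abs_err_a = [0.10, 0.20, 0.10, 0.30, 0.10]
abs_err_b = [0.30, 0.40, 0.30, 0.40, 0.50]

def paired_t_statistic(x, y):
    """t statistic of a paired t-test on matched samples x and y.

    t = mean(d) / (stdev(d) / sqrt(n)), where d are the pairwise
    differences; compare against the t distribution with n - 1
    degrees of freedom to obtain a p-value.
    """
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample standard deviation (n - 1)
    return mean_d / (sd_d / math.sqrt(n))

t = paired_t_statistic(abs_err_a, abs_err_b)
print(f"paired t statistic = {t:.3f}  (df = {len(abs_err_a) - 1})")
```

The pairing matters: the two error samples come from the same trials, so a paired test (rather than an independent two-sample test) removes the trial-to-trial variability from the comparison.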

Thank you all for your input, the summary will follow.

Best,

Tomislav Pribanic, M.Sc., EE

Department for Electronic Systems and Information Processing

Faculty of Electrical Engineering and Computing

3 Unska, 10000 Zagreb, Croatia

tel. ..385 1 612 98 67, fax. ..385 1 612 96 52

E-mail : tomislav.pribanic@fer.hr
