Results for method agreement
Tables 2-3 present the results at significance level 0.05. The estimates of β1 − β2 are 0.0276 under Model 1 and 0.5706 under Model 2, close to the true values of 0 and 0.6, respectively. Under Model 1, we do not reject the null hypothesis β1 = β2 (p-value 0.8739). Under Model 2, the p-value for testing (2.4) is 0.0010, indicating a significant difference between β1 and β2. All of these testing results agree with the underlying truth.

The Bland-Altman diagrams on both the latent scale and the probability scale are shown in Figures 1a-1d. In all diagrams the points scatter around zero in a random manner, with no systematic pattern. The dashed line shows the mean difference, and the dotted lines indicate the 95% limits of agreement. On both scales, about 95 out of 100 points lie within the 95% limits of agreement. Under Model 1, the mean difference is 0.04, close to the true difference of 0. Under Model 2, the mean difference is 0.57, close to the true difference of 0.6 and nearly identical to the GLMM estimate of 0.5706 reported in Table 2.

Table 2: Estimation for the difference between methods, β1 − β2.

  Model     Estimate   SE       P-value   95% CI
  Model 1   0.0276     0.1731   0.8739    (−0.3211, 0.3764)
  Model 2   0.5706     0.1627   0.0010    (0.2437, 0.8975)

Table 3: Estimates (with standard errors) of the variance components.

  Model     σ²_γ              σ²_α1             σ²_α2
  Model 1   0.7578 (0.1804)   0.1511 (0.0770)   0.4886 (0.1707)
  Model 2   0.7179 (0.1842)   0.1424 (0.0835)   0.3756 (0.1316)

The ICCs for the two measuring methods are calculated from (2.10) using the variance-component estimates in Table 3. Table 4 summarizes the resulting ICCs. The true ICCs for Methods 1 and 2 are 0.9 and 0.8182, respectively, so the simulated results in Table 4 are close to the true values.
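The 95% limits of agreement shown as dotted lines in the diagrams are conventionally computed as the mean difference ± 1.96 standard deviations of the paired differences. A minimal sketch of that computation, using simulated data as a stand-in for the paper's model output (the 0.6 bias only mirrors Model 2's true difference; all names here are illustrative, not the paper's code):

```python
import numpy as np

def bland_altman_limits(m1, m2):
    """Mean difference and 95% limits of agreement (mean ± 1.96 SD)
    for paired measurements from two methods on the same subjects."""
    diff = np.asarray(m1) - np.asarray(m2)
    mean_diff = diff.mean()
    sd_diff = diff.std(ddof=1)
    return mean_diff, mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff

# Hypothetical paired data: Method 2 measures the same latent quantity
# as Method 1 but with a built-in bias of 0.6, as in the Model 2 setup.
rng = np.random.default_rng(0)
truth = rng.normal(size=100)
method1 = truth + rng.normal(scale=0.3, size=100)
method2 = truth + 0.6 + rng.normal(scale=0.3, size=100)

md, lower, upper = bland_altman_limits(method2, method1)
# md should fall near the simulated bias of 0.6; roughly 95 of the
# 100 differences should lie between lower and upper.
```

In the Bland-Altman plot itself, md is drawn as the dashed line and (lower, upper) as the dotted lines.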
We also investigate the performance of our implementation of Cohen's kappa compared to naive Cohen's kappa, that is, directly applying Cohen's kappa to the observed data with correlated repeated measurements. For β1 = β2, our ap-
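The naive comparator, which applies Cohen's kappa directly to the observed rating pairs while ignoring the correlation among repeated measurements, can be sketched as follows (a hypothetical minimal implementation from the textbook formula, not the paper's code):

```python
import numpy as np

def naive_cohens_kappa(r1, r2):
    """Cohen's kappa from two raters' categorical ratings, treating all
    pairs as independent: (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is chance agreement from the marginal totals."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    n = len(r1)
    # Contingency table of joint rating counts.
    table = np.array([[np.sum((r1 == a) & (r2 == b)) for b in cats]
                      for a in cats])
    p_o = np.trace(table) / n                    # observed agreement
    p_e = (table.sum(1) @ table.sum(0)) / n**2   # chance agreement
    return (p_o - p_e) / (1 - p_e)
```

With correlated repeated measurements, the independence assumption behind p_e is violated, which is the weakness this comparison is designed to expose.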
Figure 1: The Bland-Altman diagrams. (a) Model 1 on the latent scale; (b) Model 1 on the probability scale; (c) Model 2 on the latent scale; (d) Model 2 on the probability scale.
